Huh, what an interesting issue! I think you should open a GitHub issue so this can be followed up on.
If you remove the tags, does the page resize back?
@<1556812486840160256:profile|SuccessfulRaven86>, did you install poetry on the EC2 instance itself or inside the docker container? Basically, where did you put the poetry installation bash script - in the 'init script' section of the autoscaler, or in the task's 'setup shell script' in the execution tab (that's the script that runs inside the docker container)?
It sounds like you're installing poetry on the EC2 instance itself, but the experiment runs inside a docker container.
PanickyMoth78, if I'm not mistaken that should be the mechanism. I'll look into it 🙂
Hi AbruptHedgehog21, what are you trying to do when you get this message? Are you running a self-hosted server?
@<1576381444509405184:profile|ManiacalLizard2>, why not run it with docker compose?
UnevenDolphin73, I've encountered a similar issue with S3. I believe it's going to be fixed in the next release 🙂
Can you try going to <HOST>/login
Can you check the docker containers and see if they're all up and running?
I think this is referring to your configuration file ~/clearml.conf - follow the instructions in the message to remove it, or you can just ignore it.
It's unrelated. Are you running the example and no scalars/plots are showing?
Hmm maybe @<1523701087100473344:profile|SuccessfulKoala55> might have an idea
I think it is fixed in 1.9.2, which should be released early next week.
Is that what you're looking for?
Hi ReassuredArcticwolf33 , what are you trying to do and how is it being done via code?
Hi DilapidatedDucks58 , what is your server version?
What about the network? Does anything return a 400 or something of the sort?
Hi JitteryCoyote63 , I don't believe this is possible. Might want to open a GitHub feature request for this.
I'm curious, what is the use case? Why not set some default Python docker image at the agent level, and then, only when you need a specific image, set it in the experiment configuration? See the sketch below.
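For example, a minimal sketch of the per-experiment override from code - the image and project/task names are just placeholders, and this assumes the agent runs in docker mode with a recent clearml SDK:

from clearml import Task

task = Task.init(project_name='examples', task_name='custom image demo')  # placeholder names
# Ask the agent (running in docker mode) to execute this task inside a specific image;
# the image below is only an illustrative placeholder.
task.set_base_docker('nvidia/cuda:11.8.0-runtime-ubuntu22.04')

The agent-level default image is set when the agent is launched in docker mode, so only experiments that actually need a special image have to override it.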
RoughTiger69, you can also use Task.add_requirements for a specific package through the script:
Example: Task.add_requirements('tensorflow', '2.4.0')
Example: Task.add_requirements('tensorflow', '>=2.4')
Example: Task.add_requirements('tensorflow') -> use the installed tensorflow version
Example: Task.add_requirements('tensorflow', '') -> no version limit
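A minimal usage sketch, assuming a recent clearml SDK - note the call goes before Task.init() so the requirement is recorded for the agent (project/task names are placeholders):

from clearml import Task

# Record the requirement before Task.init() so the agent installs this version
# when the task is executed remotely.
Task.add_requirements('tensorflow', '>=2.4')

task = Task.init(project_name='examples', task_name='requirements demo')  # placeholder names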
Hi @<1674226153906245632:profile|PreciousCoral74>, you certainly can, just use the Logger module 🙂
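For example, a minimal manual-reporting sketch with the Logger (metric names and values are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='manual reporting demo')  # placeholder names
logger = task.get_logger()

# Report a scalar point (title, series, value, iteration) - placeholder values.
logger.report_scalar(title='loss', series='train', value=0.05, iteration=1)
# Free-text lines show up in the task's console log.
logger.report_text('finished iteration 1')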
Hi @<1547028031053238272:profile|MassiveGoldfish6> , do you have any idea what might have caused the project to become hidden?
You can "unhide" the project via API, there is a system tag "hidden" that you can remove to unhide
I think this is due to Optuna itself - it will prune (kill) experiments it doesn't think will yield good results.
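For context, this is Optuna's standard pruning mechanism - a plain Optuna sketch, not ClearML-specific, with a placeholder objective:

import optuna

def objective(trial):
    x = trial.suggest_float('x', -10, 10)
    # Report intermediate values so the pruner can stop unpromising trials early.
    for step in range(10):
        trial.report((x - 2) ** 2, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return (x - 2) ** 2

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)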
Can you provide a full log of the VM when it's spun up manually vs when it's spun up by the autoscaler? Also, I'd try manually spinning up a VM, running an agent on it manually, and seeing if the issue reproduces.
Hi RotundHedgehog76, from an API perspective I think you are correct.
AttractiveShrimp45, can you please open a GitHub issue so we can follow up on this?
How did you name the alternative clearml.conf file?
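If it helps, one common way to point the SDK at a non-default config file is the CLEARML_CONFIG_FILE environment variable (sketch below; the path and project/task names are placeholders):

import os

# Must be set before clearml loads its configuration; the path is a placeholder.
os.environ['CLEARML_CONFIG_FILE'] = '/path/to/alt-clearml.conf'

from clearml import Task
task = Task.init(project_name='examples', task_name='alt config demo')  # placeholder names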
You can see my answer in the channel.