but I want to change what is shown by the GUI, so that would need to be a setting on the server itself?
Can you please elaborate?
We use the GCP SDK under the hood. Can you try downloading to the same folder on the NFS using the GCP SDK directly?
Also can you provide the full log for better context?
You can read up on the caching options in your ~/clearml.conf
You can make virtualenv creation a bit faster
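For reference, these are the sections I'd look at — a sketch based on the default reference config, so exact keys may differ slightly between versions:
```
# ~/clearml.conf (HOCON format)
sdk {
    storage {
        cache {
            # local cache for artifacts / datasets pulled by the SDK
            default_base_dir: "~/.clearml/cache"
        }
    }
}

agent {
    # cache downloaded pip packages between runs
    pip_download_cache {
        enabled: true
        path: ~/.clearml/pip-download-cache
    }
    # reuse previously built virtualenvs instead of rebuilding them
    venvs_cache {
        max_entries: 10
        free_space_threshold_gb: 2.0
        path: ~/.clearml/venvs-cache
    }
}
```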
Hi @<1523703012214706176:profile|GorgeousMole24> , I'm not sure about the exact definition, but I think it's when the script finishes running or when the thread that called Task.init() exits.
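If you need the task to close at a specific point rather than at script exit, you can also close it explicitly — a minimal sketch (project/task names are just placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="explicit close")

# ... your training / processing code ...

# explicitly mark the task completed instead of waiting for the
# script (or the thread that called Task.init) to finish
task.close()
```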
It looks like you can't access the file due to permissions. And in the agent run there is no such file. How are you storing it and how are you trying to fetch it in code during the agent run?
Hi @<1547028031053238272:profile|MassiveGoldfish6> , the expected behavior would be pulling only the 100 files 🙂
Why do you manually use set_repo?
Please do 🙂
What do you mean by signature?
Hi @<1790190274475986944:profile|UpsetPanda50> , Optuna has an internal mechanism for early stopping
AbruptWorm50, the team tells me it's in progress and we should have an update in the next few minutes 🙂
@<1644147961996775424:profile|HurtStarfish47> , you also have the auto_connect_frameworks parameter of Task.init to disable the automatic logging, and then you can manually log using the Model module to name and register the model yourself (and upload it, of course).
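A minimal sketch of what I mean (project, task, and file names are just placeholders), assuming a PyTorch checkpoint:
```python
from clearml import Task, OutputModel

# disable automatic framework logging so the model isn't registered twice
task = Task.init(
    project_name="examples",
    task_name="manual model logging",
    auto_connect_frameworks=False,
)

# ... train and save your model, e.g. torch.save(model.state_dict(), "model.pt") ...

# manually register and upload the weights under the name you want
output_model = OutputModel(task=task, name="my-model", framework="PyTorch")
output_model.update_weights(weights_filename="model.pt")
```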
Hi @<1546303277010784256:profile|LivelyBadger26> , when you run the agent, you can specify which GPUs to use with the --gpus argument, like you did
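For example (queue name and GPU indices are just placeholders):
```bash
clearml-agent daemon --queue default --gpus 0,1
```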
Hi, can you provide the full log?
Hi GrittyCormorant73 ,
Did you define a single queue or multiples?
@<1707203455203938304:profile|FoolishRobin23> , the agent in the docker compose is a services agent and it's not for running GPU jobs. I'd suggest running the clearml-agent with the GPU manually.
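Something along these lines (queue name and docker image are just examples):
```bash
clearml-agent daemon --queue gpu_queue --gpus 0 --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04
```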
Yes, I think that would be the best solution.
Can you add a full log of an experiment?
Hi @<1623491856241266688:profile|TenseCrab59> , are you self deployed? Can you provide some logs/screenshots? If you go directly into the task information of each step, is the console empty?
Do you have a way to see which docker images you have locally (if they weren't removed) to see on which version you were previously?
I'm not sure. But you can access the clearml.conf file through code
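A minimal sketch of reading it from code, assuming the file is at the default ~/clearml.conf and pyhocon is installed (clearml.conf is HOCON format):
```python
import os
from pyhocon import ConfigFactory

# parse the local clearml.conf; default location assumed
conf = ConfigFactory.parse_file(os.path.expanduser("~/clearml.conf"))

# read a value, e.g. the files server URL
files_server = conf.get("api.files_server", None)
print(files_server)
```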
Hi @<1717350310768283648:profile|SplendidFlamingo62> , you can basically export the same plots from a model and do that in the report. Or am I missing something?
Hi @<1882599179692281856:profile|FriendlyBluewhale89> I think it's support@clearml.ai , it should be on the website 🙂
I mean what python version did you initially run it locally?
Hi @<1523701304709353472:profile|OddShrimp85> , I would suggest looking at the examples here:
None
Hi @<1791277437087125504:profile|BrightDog7> , do you have a code snippet that reproduces this? Is it for all plots or only specifics?
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think you would need to expose those configurations through the pipeline controller and then the tasks would take those configurations and override them with what you inserted into the controller.
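A minimal sketch of what I mean, with placeholder project/task/parameter names, using the PipelineController parameter mechanism:
```python
from clearml.automation import PipelineController

pipe = PipelineController(
    name="my pipeline",
    project="examples",
    version="1.0.0",
)

# expose the configuration on the controller itself
pipe.add_parameter(name="batch_size", default=32, description="batch size passed to the steps")

# forward it to the step task, overriding the value the step was created with
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train task",
    parameter_override={"General/batch_size": "${pipeline.batch_size}"},
)

pipe.start()
```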
Makes sense?

