Hi UnevenDolphin73, is there a specific setting you're looking for?
Hi @<1613344994104446976:profile|FancyOtter74> , I think this is caused by creating a dataset inside the same task. That creates a connection between the task and the dataset, so the task is moved to a special folder for datasets. Is there a specific reason you're creating both a Task and a Dataset in the same code? If not, see the sketch below.
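A minimal sketch of creating the dataset from its own standalone script instead (dataset name, project and path are placeholders):
```python
from clearml import Dataset

# Running this in its own script (no Task.init in the same process)
# keeps the dataset as a separate entry rather than attaching it to
# an experiment's task. Names and path below are placeholders.
dataset = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
dataset.add_files(path="data/")
dataset.upload()
dataset.finalize()
```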
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I see that there is also no force flag in the SDK. Maybe open a GitHub feature request to add a force option, or to simply allow deleting archived published tasks.
Currently, you can use the API to delete those experiments through code.
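A rough sketch of doing that with the APIClient (the task ID is a placeholder, and I'm assuming force=True is what's needed to override the published state):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# "TASK_ID" is a placeholder; force=True to delete despite the published state
client.tasks.delete(task="TASK_ID", force=True)
```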
OK, there appears to be a GitHub issue relating to this:
https://github.com/allegroai/clearml/issues/388
I was right about having encountered this before. People have asked for this, and I think adding it is considered a priority.
You can circumvent auto-logging with the following:
task = Task.init(..., auto_connect_frameworks={'pytorch': False})
However, you will now need to log other models manually. More information is in the GitHub issue 🙂
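If it helps, a minimal sketch of registering weights manually with OutputModel (project, task and file names are placeholders):
```python
from clearml import OutputModel, Task

# Placeholders throughout; pytorch auto-logging disabled as above
task = Task.init(
    project_name="examples",
    task_name="manual model logging",
    auto_connect_frameworks={"pytorch": False},
)
output_model = OutputModel(task=task)
# Explicitly register/upload the weights file you actually want logged
output_model.update_weights(weights_filename="model.pt")
```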
DepressedChimpanzee34, this has been reported and should be solved in one of the upcoming versions 🙂
Hi @<1523701504827985920:profile|SubstantialElk6> , thanks for the heads up 🙂
Hi TroubledHedgehog16, I don't think there is any specific documentation regarding this. Basically, anything that communicates with the server (UI/SDK/Agent) will increase these calls.
You could run a test on a free account with your workload to see how many calls you'd reach on a peak day.
Hi LethalCentipede31, I don't think there is an out-of-the-box solution for this, but saving them as debug samples sounds like a good idea. You can simply report them as debug samples and that should work 🙂
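For example, something along these lines should do it (project, task and file names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="debug samples")  # placeholders
logger = task.get_logger()
# Report an image file as a debug sample; it shows up under the task's Debug Samples
logger.report_media(title="predictions", series="sample", iteration=0, local_path="pred.png")
```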
I think AnxiousSeal95 updates us when there is a new version or release 🙂
Hi @<1724960468822396928:profile|CumbersomeSealion22> , what was the structure that worked previously for you and what is the new structure?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I don't think there are any login credentials for MongoDB by default in the open-source version
And if you switch back to 1.1.2 in the setup where 1.1.1 worked, does it still fail?
I think I misunderstood your problem at the start. Let me take another look 🙂
If you're running on GCP, I think using the autoscaler is a far easier and more cost-efficient solution. The autoscaler can spin instances up and down on GCP according to your needs.
I meant that maybe you ran it with a newer version of the SDK
Hi ObedientToad56, this value appears in your clearml.conf and needs to be changed on the machine running the agent
It is reported as a plot, not an artifact 🙂
Although I think syncing the databases across different servers would be a problem
Hi @<1597762318140182528:profile|EnchantingPenguin77> , do you have a code snippet that reproduces this? Where is that API call originating from?
Currently the UI will show you the timeline going back up to a month for worker usage etc. If you want to go three months back and get specifics, you'd have to pull the data directly from the API and aggregate it yourself.
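As a rough sketch of the aggregation side (this just sums per-task runtimes; filters, pagination and timestamp parsing may need adjusting for your server):
```python
from datetime import datetime
from clearml.backend_api.session.client import APIClient

def as_dt(value):
    # Timestamps may come back as datetime objects or ISO strings
    if value is None or isinstance(value, datetime):
        return value
    return datetime.fromisoformat(str(value).replace("Z", "+00:00"))

client = APIClient()
tasks = client.tasks.get_all(
    status=["completed"],
    only_fields=["id", "started", "completed"],
    page=0,
    page_size=500,
)
total = 0.0
for t in tasks:
    started, completed = as_dt(t.started), as_dt(t.completed)
    if started and completed:
        total += (completed - started).total_seconds()
print(f"Total runtime: {total / 3600:.1f} hours")
```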
Hi @<1739455977599537152:profile|PoisedSnake58> , you can run the agent in docker mode as long as the image is available on your machine. You can also use clearml-agent build, please see the docs for more details.
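For example, starting the agent in docker mode (queue name and image are placeholders):
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04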
I'm assuming your dictionary contains non-basic types (custom objects of some sort)
What do you have inside this dict?
Hi @<1523701553372860416:profile|DrabOwl94> , do you see any errors in Elasticsearch?
You added two logs, one with Docker and the other without. Each stopped at a different step. Is that consistent? What OS is the agent running on? Also, what command are you using to run the agent?
Hi @<1679299603003871232:profile|DefeatedOstrich25> , you mean you're on the community server? Do you see any sample datasets in the Datasets section?
The agent is basically a daemon process that sits on a machine and is capable of running jobs. You can set it up on any machine you'd like, but the machine has to be turned on...
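For example, a typical way to start one on a machine (queue name is a placeholder):
clearml-agent daemon --queue default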
For example, in the response of tasks.get_by_id you get the data in data.tasks.0.started and data.tasks.0.completed
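Roughly like this with the APIClient (the task ID is a placeholder, and the exact attribute layout may differ slightly):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
task = client.tasks.get_by_id(task="TASK_ID")  # placeholder ID
# started / completed timestamps from the response data
print(task.data.started, task.data.completed)
```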
I hope this helps 🙂
This can be a bit of a problem as well, since not all packages for 3.8 have the same versions available for 3.6, for example. It's recommended to run on the same Python version, OR to have the required Python version installed on the remote machine
In the UI, you can edit the docker image you want to use. You can then choose an image with the needed Python pre-installed.