Hi MoodyCentipede68,
What version of ClearML / ClearML-Agent are you using? Is it a self-hosted server or the SaaS?
Also, can you explain what step 7 was trying to do? Is it running locally or distributed?
Hi and welcome 🙂
You want to add some plots to the dataset task?
Hi ElegantCoyote26 , I don't think so. I'm pretty sure the AWS AMIs are released for the open source server 🙂
It is returned in queues.get_all. I'd suggest navigating to the webUI and checking what the webUI is sending to the server (it's all API calls), then replicating that in code with the APIClient.
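As a rough sketch (assuming a configured `clearml.conf`), this is roughly how you'd replicate the webUI's `queues.get_all` call with the APIClient:

```python
# Sketch: query queues the same way the webUI does -- the UI itself just
# calls the queues.get_all REST endpoint.

def queue_names(queues):
    # Pure helper: pull the names out of the objects queues.get_all returns
    return [q.name for q in queues]

def list_queues():
    # APIClient wraps the same REST endpoints the webUI uses
    from clearml.backend_api.session.client import APIClient
    client = APIClient()
    return queue_names(client.queues.get_all())
```

Any call you see in the browser's network tab should have a matching method on the client in the same `<service>.<action>` shape.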
It's an interesting question!
I think that the instances are terminated when they are spun down, therefore the behavior should be the same as if you had terminated them yourself manually.
VirtuousFish83 Hi 🙂
What versions are you running with: ClearML, ClearML-Agent, Torch, Lightning? Which OS are they running on, and with what Python version?
Do you maybe have a snippet to play around with to try and reproduce the issue?
Hi RotundHedgehog76 , from API perspective I think you are correct
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I'm afraid there is no clear-cut way to migrate data from the community server to your own self-hosted server, since the databases aren't compatible.
One workaround would be to pull all experiment information via the API (the structure/logs/metrics) and then repopulate the new server using the API. I think it would be a bit cumbersome, but it can be achieved.
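A very rough sketch of what that could look like, assuming API access to both servers; the field list and the `src_client`/`dst_client` names are illustrative, not a complete migration:

```python
# Sketch: copy basic task info from one ClearML server to another via the API.

def task_export(task_data):
    # Pure helper: keep only the fields worth re-creating on the new server
    keys = ("name", "project", "type", "comment", "tags")
    return {k: task_data[k] for k in keys if task_data.get(k) is not None}

def migrate_tasks(src_client, dst_client, project_id):
    # src_client / dst_client would be APIClient instances, one per server
    for task in src_client.tasks.get_all(project=[project_id]):
        dst_client.tasks.create(**task_export(task.to_dict()))
```

Logs and metrics would need their own export/import passes (e.g. `events.get_task_log` on the source side), which is where the cumbersome part comes in.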
JuicyFox94 , can you please assist? 🙂
That sounds like a good idea! Can you please open a GitHub issue to track this?
Hi @<1523701304709353472:profile|OddShrimp85> this is the way I do it:
None <BUCKET>/
Hi UnevenDolphin73 , it does run that danger, however it will spin down after a timeout if there is nothing for it to pick up from the queue
Well, if you save it as an artifact, that artifact is accessible by other tasks and can be passed through the pipeline via the monitor_artifacts
parameter of add_step():
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_step
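A minimal sketch of that; the project/task names are placeholders, and `monitor_artifacts` re-logs the step's artifact on the pipeline task so later steps (and you) can find it:

```python
# Sketch: pipeline step that exposes one of its artifacts via monitor_artifacts.

def build_pipeline():
    from clearml import PipelineController

    pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="train task",
        monitor_artifacts=["model_weights"],  # artifact name from the train task
    )
    pipe.add_step(
        name="evaluate",
        parents=["train"],
        base_task_project="examples",
        base_task_name="eval task",
    )
    return pipe
```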
DepravedSheep68 , do you mean when registering your data?
Hi @<1594863230964994048:profile|DangerousBee35> , I'm afraid that the self-hosted version and the PRO version are entirely disconnected. There are many more advanced features in the Scale/Enterprise licenses, where you can have a mix of all the features you might be looking for. You can see the different options here - None
I would look for an AMI that already has CUDA and all drivers installed. The same goes for the docker image
Hi @<1523701949617147904:profile|PricklyRaven28> , note that steps in a pipeline are special tasks with a hidden system tag. I think you might want to enable that in your search
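Something like this sketch, assuming you're querying with the SDK; the filter keys are my assumption of how to include hidden tasks:

```python
# Sketch: pipeline steps are regular tasks carrying the "hidden" system tag,
# so the query has to ask for that tag explicitly.

def step_filter(pipeline_task_id=None):
    # Pure helper: a Task.get_tasks filter that doesn't exclude hidden tasks
    task_filter = {"system_tags": ["hidden"]}
    if pipeline_task_id:
        task_filter["parent"] = pipeline_task_id  # steps are children of the controller
    return task_filter

def find_pipeline_steps(pipeline_task_id=None):
    from clearml import Task
    return Task.get_tasks(task_filter=step_filter(pipeline_task_id))
```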
Is it possible to support changing the server address, so that the images are pulled up from the new server?
Do the links point to a bucket or the fileserver?
DepressedChimpanzee34 , which section are you referring to? Can you provide a screenshot of what you mean?
Hi @<1582179661935284224:profile|AbruptJellyfish92> , how do the histograms look when you're not in comparison mode?
Can you provide a self-contained snippet that creates such histograms and reproduces this behavior, please?
It needs to be in the base task
I'll try and see if it reproduces on my side, thanks! 🙂
I'm afraid not, as it would still require a data merge.
What code did you try running? It appears that "services" is set as the default queue in the code. You can create this queue and run an agent against it to execute the tasks
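A sketch of creating the queue programmatically, assuming a configured APIClient (it can also be created in the webUI under Workers & Queues, or, I believe, by the agent itself with `clearml-agent daemon --queue services --create-queue`):

```python
# Sketch: make sure the "services" queue exists before pointing an agent at it.

def ensure_queue(client, name="services"):
    # Return the queue id, creating the queue if it doesn't exist yet
    existing = client.queues.get_all(name=name)
    if existing:
        return existing[0].id
    return client.queues.create(name=name).id
```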
Hi FreshKangaroo33 ,
I think you could use a dedicated hyperparameter for it. It would show up in the UI as a column, it can take any value you want, AND you can filter by it 🙂
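For example, something like this sketch; the parameter name and section are made up for illustration, and anything connected this way appears as a hyperparameter column in the experiments table:

```python
# Sketch: attach a free-form label as a hyperparameter so it becomes a
# sortable, filterable column in the UI.

def label_params(label):
    # Pure helper: the dict we'd connect to the task
    return {"experiment_label": label}

def run_labeled(label):
    from clearml import Task
    task = Task.init(project_name="examples", task_name="labeled run")
    task.connect(label_params(label), name="Labels")  # shows up under the "Labels" section
    return task
```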
Usage quota is calculated a few times a day. The new stats should be reflected within a few hours