@<1529271085315395584:profile|AmusedCat74> , wow that's an impressive find! Did you see it mentioned somewhere, or did you figure it out yourself?
I suggest running it in docker mode with a docker image that already has CUDA installed
Can you provide an example snippet?
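Something along these lines (the queue name and image tag are placeholders - pick a CUDA image matching your drivers):
` clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04 `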
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , what version of clearml-agent are you using? Can you provide a full log of the run?
Hi FancyWhale93 , task.data.created should be good. The UI uses this parameter for display, so there shouldn't be an issue. You can also try task.data.started
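For example, a minimal sketch (the task ID is a placeholder):
` from clearml import Task
task = Task.get_task(task_id="TASK_ID")
print(task.data.created)  # creation time, the same field the UI shows
print(task.data.started)  # when the task actually started running `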
Try creating a new version and syncing the local folder (or try to specifically add files) 🙂
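If it helps, a rough sketch of that flow (the names, parent ID and path are all placeholders):
` from clearml import Dataset
# parent_datasets makes this a new version of an existing dataset
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project", parent_datasets=["PARENT_DATASET_ID"])
ds.sync_folder(local_path="/path/to/local/folder")  # or ds.add_files(path=...) for specific files
ds.upload()
ds.finalize() `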
JitteryCoyote63 , doesn't seem to happen for me. I'll try raising a clean server and see if it happens there. You're running with 1.2, correct?
Reproduces for me as well. Taking a look at what can be done 🙂
I think you can simply reset and enqueue the task again for it to run. Question is, why did it fail?
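Something like this should do it (a sketch - the task ID and queue name are placeholders):
` from clearml import Task
task = Task.get_task(task_id="TASK_ID")  # the failed task
task.reset()  # clears previous outputs and returns the task to draft
Task.enqueue(task, queue_name="default") `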
Are you running the HPO example? What do you mean by adding more parameter combinations? If the optimizer task has finished, you either need a new one or to reset the previous one and re-run it.
You can do various edits while in draft mode
You can restore these tasks by copying or moving them from the task__trash collection into the task collection, but the events for these tasks cannot be restored. As for the user who deleted them, unfortunately ClearML does not record this info in Mongo, and without logging to ES there is no place to retrieve it (I can suggest using Kibana to monitor ES). You can also try to inspect the mongo collection url_to_delete. It contains all the links from the deleted tasks that should be removed from the fileserver. If you se...
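For the restore itself, a rough pymongo sketch (the connection URI, database name and task IDs are all assumptions - back up before touching the DB directly):
` from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["backend"]  # assuming the default ClearML database name
for doc in db["task__trash"].find({"_id": {"$in": ["TASK_ID_1", "TASK_ID_2"]}}):
    db["task"].insert_one(doc)  # move the task back into the live collection
    db["task__trash"].delete_one({"_id": doc["_id"]}) `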
Hi @<1654294828365647872:profile|GorgeousShrimp11> , long story short - you can.
Now to delve into it a bit - you can trigger entire pipeline runs via the API.
I can think of two options off the top of my head. The first is some sort of "service" task running constantly, listening for some event and then triggering pipeline runs.
The second is some external source sending a POST request via the API to trigger a pipeline run.
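Either way, cloning and enqueuing the pipeline controller task should do the trick - a sketch (project, task and queue names are placeholders):
` from clearml import Task
template = Task.get_task(project_name="pipelines", task_name="my_pipeline")  # the controller to re-run
new_run = Task.clone(source_task=template)
Task.enqueue(new_run, queue_name="services")  # pipelines usually run in the services queue `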
What do you think?
You can view all projects and search there 🙂
Of course :)
You can select tasks in different projects in table view or you can add experiments to an existing compare
Hi @<1523701062857396224:profile|AttractiveShrimp45> , I think this is currently by design. How would you suggest doing multiple-metric optimization - priority between metrics after a certain threshold is met?
DistressedGoat23 , how are you running this hyperparameter tuning? Ideally you need to have
` from clearml import Task
task = Task.init() `
In your running code, from that point onwards you should have tracking
I think so, yes. You need a machine with a GPU - this is assuming I'm correct about the n1-standard-1 machine
@<1556812486840160256:profile|SuccessfulRaven86> , I think this is because you don't have the proper permissions 🙂
Now try logging in
@<1523704089874010112:profile|FloppyDeer99> , can you try upgrading your server? It appears to be a pretty old version.
When looking at the user in MongoDB, is it some special user or just a regular one?
What version of python is the agent machine running locally?
Does it support `torch == 1.12.1` ?
Hi @<1523701132025663488:profile|SlimyElephant79> , are you running both from the same machine? Can you share the execution tab of both pipeline controllers?
Also, the reason they are in a queued state is that no worker is picking them up. You can control the queue to which every step is pushed; I think by default they are sent to the 'default' queue
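For reference, a sketch of routing a step to a specific queue (all names are placeholders):
` from clearml.automation import PipelineController
pipe = PipelineController(name="my_pipeline", project="my_project")
pipe.add_step(name="step_one", base_task_project="my_project", base_task_name="step one task", execution_queue="gpu_queue")  # overrides the default queue for this step `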
DeliciousBluewhale87 , I believe so, yes 🙂
Hi @<1523702000586330112:profile|FierceHamster54> , do you mean you changed the policy during upload?
Does it go back to working if you revert the changes?
This is the default image. I guess it doesn't have the Python version you need to run with.
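One way around it, assuming docker mode - point the task at an image that has the interpreter you need (the project, task name and image tag are placeholders):
` from clearml import Task
task = Task.init(project_name="my_project", task_name="my_task")
task.set_base_docker("python:3.10")  # the agent will run the task inside this image `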
Hi PerfectMole86 ,
how do I connect it to clearml installed outside my docker container?
Can you please elaborate?
Hi DepressedFish57 ,
But authentication by login/password is disabled on the host side.
Can you please clarify?
By separate ssh key - do you mean a different git password?