Hi TartBear70 ,
Did you run the experiment locally first? What versions of clearml/clearml-agent are you using?
SubstantialElk6 , Hi 🙂
In the UI, do you get ubuntu:20.04 as the docker container for the experiment?
Hi @<1545216070686609408:profile|EnthusiasticCow4>, start_locally() has the run_pipeline_steps_locally parameter for exactly this 🙂
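Something like this, as a rough sketch (project/pipeline/step names are placeholders):
```python
from clearml import PipelineController

def step_one():
    # trivial example step
    print("running step one")

# names here are just examples
pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(name="step_one", function=step_one)

# run the controller and all of its steps in the local process instead of enqueuing them
pipe.start_locally(run_pipeline_steps_locally=True)
```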
Hello CurvedHedgehog15, I don't think there is such an option. You can, however, add metrics to a completed task.
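Roughly along these lines (the task ID is a placeholder, and depending on the task's state you may need to force it back to 'started' first):
```python
from clearml import Task

# placeholder ID of the already-completed task
task = Task.get_task(task_id="aabbccddeeff00112233445566778899")

# if reporting to a completed task is rejected, reopening it first usually helps
task.mark_started(force=True)

logger = task.get_logger()
logger.report_scalar(title="post-hoc metrics", series="accuracy", value=0.93, iteration=0)
logger.flush()

# close it again so it goes back to 'completed'
task.mark_completed()
```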
Also, did you run the experiment inside the docker? Just making sure 🙂
Hi @<1542316991337992192:profile|AverageMoth57> , I think this is what you're looking for 🙂
None
Yes, but this data is managed by MongoDB. Also, since you have full visibility into the users/passwords, you could probably generate a token yourself, similar to how the UI does it when you log in
It's basically the paid version of ClearML. It is geared towards larger teams, with services offered by the ClearML team to make your and your users' lives easier, and it provides additional features often required by large teams
then yeah, all data sits in /opt/clearml/data
Although I think syncing the databases across different servers would be a problem
Are you using the community server, or are you self-hosting the open source version?
The DataOps feature will abstract your usage of data - None
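Just to illustrate the abstraction, a minimal sketch with the Dataset SDK (project/dataset names and paths are placeholders):
```python
from clearml import Dataset

# create and upload a dataset version
ds = Dataset.create(dataset_project="examples", dataset_name="my-dataset")
ds.add_files(path="./data")
ds.upload()
ds.finalize()

# later, any consumer just asks for the dataset and gets a local copy,
# regardless of where the files are actually stored
local_path = Dataset.get(dataset_project="examples", dataset_name="my-dataset").get_local_copy()
print(local_path)
```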
Hi RotundSquirrel78 , can you try clearing local cache? For me everything is showing properly
You need to separate the Task object itself from the code that is running. If you're manually 'reviving' a task but nothing happens and no code is running, the task will eventually get aborted. I'm not entirely sure I understand what you're doing, but I have a feeling it's something 'hacky'.
VivaciousPenguin66, the resource you have to access in the Azure portal is "Access keys" under "Security + networking". Input that key under "SECRET/SAS" on the profile page.
Hi @<1739818374189289472:profile|SourSpider22> , can you provide a full log of the run?
Hi @<1707203455203938304:profile|FoolishRobin23> , not sure I understand. Are you setting something in addition to the services agent in the docker compose?
Hi @<1671689442621919232:profile|ItchyDuck87> , did you manage to register directly via the SDK?
Can you please elaborate a bit on your setup and what you're trying to achieve?
So I think I'm missing something. What is the point of failure?
ClearML tries to detect the packages you used during the code execution. It will then try to install those packages when running remotely.
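If the detection misses something, you can also hint it explicitly before Task.init() (package name/version here are just examples):
```python
from clearml import Task

# manually add a requirement the auto-detection might have missed
Task.add_requirements("pandas", "1.5.3")

task = Task.init(project_name="examples", task_name="requirements demo")
```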
Hi @<1762286410452176896:profile|ExcitedFrog68> , if you open dev tools (F12) do you see any console errors?
I think you need to provide the app password for GitHub/Bitbucket instead of your personal password
No, it wouldn't, since something would actually be going on and the Python script hasn't finished
Hi @<1539780258050347008:profile|CheerfulKoala77>, it seems that you're trying to use the same 'Workers Prefix' setting for two different autoscalers. The workers prefix must be unique between autoscalers.
Hi @<1649946171692552192:profile|EnchantingDolphin84> , what about this example?
None
Add an argparser to change the configuration of the HyperParameterOptimizer class.
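Something along these lines, as a sketch (the base task ID, metric names and parameter range are placeholders):
```python
import argparse

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange

# expose a few optimizer settings on the command line (defaults are just examples)
parser = argparse.ArgumentParser()
parser.add_argument("--base-task-id", required=True)
parser.add_argument("--max-concurrent-tasks", type=int, default=2)
parser.add_argument("--total-max-jobs", type=int, default=10)
args = parser.parse_args()

Task.init(project_name="examples", task_name="HPO controller", task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id=args.base_task_id,
    hyper_parameters=[UniformIntegerParameterRange("General/epochs", min_value=1, max_value=10)],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    max_number_of_concurrent_tasks=args.max_concurrent_tasks,
    total_max_jobs=args.total_max_jobs,
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
```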
What do you think?
@<1533619725983027200:profile|BattyHedgehong22> , it appears from the log that it is failing to clone the repository. You need to provide credentials in clearml.conf
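For reference, that usually means something like this in clearml.conf (user name and app password are placeholders):
```
agent {
    # credentials the agent uses to clone private repositories
    git_user: "my-git-username"
    git_pass: "my-app-password-or-token"
}
```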
I'm assuming your dictionary is made of non-basic types (like objects of some sort)
What do you have inside this dict?
How did you configure the files_server in clearml.conf?
I see. Leave the files_server section as it was by default. Then specify the --output-uri flag in the CLI.
None
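Assuming this is about where the task output goes, the SDK equivalent of that flag would be something like this (the bucket URL is a placeholder):
```python
from clearml import Task

# mirrors the --output-uri CLI flag: artifacts/models get uploaded to this destination
task = Task.init(
    project_name="examples",
    task_name="output destination demo",
    output_uri="s3://my-bucket/clearml-artifacts",
)
```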
SkinnyPanda43 , did Gabriel's tip help?