Hi @<1710827340621156352:profile|HungryFrog27> , can you provide a full log of the task?
Hi @<1715900788393381888:profile|BitingSpider17> , I think this is what you're looking for - None
Also, in the Network section of the developer tools, what is returned for one of the 400 responses?
Try setting the following environment variables:
%env CLEARML_WEB_HOST=
%env CLEARML_API_HOST=
%env CLEARML_FILES_HOST=
%env CLEARML_API_ACCESS_KEY=...
%env CLEARML_API_SECRET_KEY=...
and try removing the clearml.conf file 🙂
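For plain Python scripts (outside Jupyter, where the %env magic is unavailable), the same settings can be applied with os.environ before clearml is imported. All the host URLs and keys below are placeholders, not real values:

```python
import os

# Placeholder values - replace with your own server URLs and credentials.
# These must be set before the clearml package is imported.
os.environ["CLEARML_WEB_HOST"] = "https://app.example.com"
os.environ["CLEARML_API_HOST"] = "https://api.example.com"
os.environ["CLEARML_FILES_HOST"] = "https://files.example.com"
os.environ["CLEARML_API_ACCESS_KEY"] = "YOUR_ACCESS_KEY"
os.environ["CLEARML_API_SECRET_KEY"] = "YOUR_SECRET_KEY"
```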
ScaryBluewhale66 , Hi 🙂
You would need to install ClearML-Agent to run it
Hi @<1523708920831414272:profile|SuperficialDolphin93> , simply set output_uri=/mnt/nfs/shared in Task.init
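A minimal sketch of that call, using the NFS mount path from the message above; the wrapper function name is mine, and the import is deferred into it so the snippet loads even where clearml is not installed:

```python
def init_task_with_shared_output(project, name, output_uri="/mnt/nfs/shared"):
    # Deferred import so this sketch can be loaded without clearml installed.
    from clearml import Task

    # output_uri redirects artifact and model uploads to the shared NFS mount.
    return Task.init(project_name=project, task_name=name, output_uri=output_uri)
```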
@<1808672054950498304:profile|ElatedRaven55> , what if you spin up the agent manually on that machine and then push the experiment for execution from there?
Can you add a snippet of how you're generating/presenting the matplotlib plots?
Also, what exact command line did you use to run the agent?
You will need to find the appropriate docker image with the python version you're looking for.
And were the experiments run on agents or locally (i.e. PyCharm/terminal/VSCode/Jupyter/...)?
Are you running a self-deployed server? If so, what version is it?
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think you need this module as part of the repository; otherwise, how will the pipeline know which code to use?
YummyLion54 , let me take a look 🙂
Hi @<1582904448076746752:profile|TightGorilla98> , can you check on the status of the elastic container?
ClearML has a built-in model repository, so together I think they make a "feature store". Again, it really depends on your definition.
I think in this case you can fetch the task object, force it into running mode, and then edit whatever you want. Afterwards, just mark it completed again.
None
Note the force parameter
Hi @<1752139552044093440:profile|UptightPenguin12> , for that you would need to use the API and use the mark_completed call with the force flag on
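A rough sketch of that flow with the SDK (Task.get_task, mark_started with force=True, then mark_completed); the helper name and the comment edit are just illustrative, and the import is deferred so the snippet loads without clearml installed:

```python
def reopen_edit_and_complete(task_id, new_comment):
    # Deferred import so this sketch can be loaded without clearml installed.
    from clearml import Task

    task = Task.get_task(task_id=task_id)
    # Force the completed task back into "running" mode so it becomes editable.
    task.mark_started(force=True)
    # ...edit whatever you need; editing the comment is just an example...
    task.set_comment(new_comment)
    # Mark it completed again when done.
    task.mark_completed()
```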
Hi @<1523702786867335168:profile|AdventurousButterfly15> , are the models logged in the artifacts section?
The one sitting in the repository
Let me know if it changes anything. Of course, rerun the agent afterwards.
Hi GrittyCormorant73 , can you please add the error you're getting? What version of ClearML are you using?
ApprehensiveSeahorse83 , also try with Task.init(..., output_uri="<GS_BUCKET>")
Can you try running it via agent without the docker?
Hi @<1655744373268156416:profile|StickyShrimp60> , do you have any code that can reproduce this behavior?
I'm reading up on task.set_credentials at the moment. What exactly are you trying to do?
It's a way to execute tasks remotely and even automate the entire process of data pre-processing -> training -> output model 🙂
You can read more here:
https://github.com/allegroai/clearml-agent
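A minimal sketch of that remote-execution flow, assuming a configured server and a running clearml-agent listening on the queue; the function name and queue name are placeholders of mine, and the import is deferred so the snippet loads without clearml installed:

```python
def run_remotely(project, name, queue="default"):
    # Deferred import so this sketch can be loaded without clearml installed.
    from clearml import Task

    task = Task.init(project_name=project, task_name=name)
    # Stops local execution here and enqueues the task for a
    # clearml-agent to pick up and run on its machine.
    task.execute_remotely(queue_name=queue)
```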