Hi @<1541592227107573760:profile|EnchantingHippopotamus83> , to "clean" a task, you need to reset it. Resetting a task will purge all outputs
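If you want to do this programmatically, here is a minimal sketch using the SDK (the task ID is a placeholder; note that resetting also returns the task to draft state):
from clearml import Task

task = Task.get_task(task_id="<your-task-id>")
task.reset()  # purges previous outputs and returns the task to draft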
DilapidatedDucks58 , regarding the internal workings: MongoDB stores all the experiment objects; Elasticsearch stores the console logs, debug samples, and scalars; Redis is used for some agent-related things, I think
I think that once a version has been finalized you can't add changes to it directly. You could probably hack around it by manually setting it back to running via the API, adding the relevant connections, and then moving it back to completed
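If you do try that route, a very rough sketch of the kind of API calls involved (purely illustrative - the ID is a placeholder, and you may need the force flags depending on the current state):
from clearml.backend_api.session.client import APIClient

client = APIClient()
client.tasks.started(task="<version-task-id>", force=True)    # push it back to running
# ... add the relevant connections / changes here ...
client.tasks.completed(task="<version-task-id>", force=True)  # move it back to completed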
LethalCentipede31 , it appears we had an internal issue with a load balancer, it was fixed a couple of minutes after your comment 🙂
BTW - did the agent print out anything? Which version of clearml-agent are you using?
I would suggest structuring everything around the Task object. After you clone and enqueue, the agent can handle all the required packages and the environment. You can even set environment variables so it won't try to create a new environment but will use the existing one in the Docker container.
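For example, a minimal clone-and-enqueue sketch (project, task, and queue names are placeholders):
from clearml import Task

# clone an existing (template) task and send the clone to an execution queue
template = Task.get_task(project_name="examples", task_name="hello world")
cloned = Task.clone(source_task=template, name="hello world - clone")
Task.enqueue(cloned, queue_name="default")
For reusing the existing environment inside the container, have a look at the agent's environment variables for skipping the virtual-env creation (e.g. the CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL family) - check the agent docs for the exact variable that fits your setup.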
Hi @<1718799873618219008:profile|FunnyPeacock68> , you can set this up in the clearml.conf of the running agent
None
Hi @<1715900788393381888:profile|BitingSpider17> , you can run the agent with the --debug flag, and this should be passed on to the internal agent instance that runs the code
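For example (a sketch - double check the exact flag placement with clearml-agent --help):
clearml-agent --debug daemon --queue default --foreground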
Hi @<1715900788393381888:profile|BitingSpider17> , you need to set it in the environment where you are running the agent. Basically export it as an env variable and then run the agent
Hi @<1719524669695987712:profile|ClearHippopotamus36> , what if you manually add these two packages to the installed packages section in the execution tab of the experiment?
HappyDove3 Hi 🙂
Well, since all the data is logged you can simply use the API to retrieve it and create the tables yourself quite easily!
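For example, a rough sketch that pulls the reported scalars of a task into a pandas table (the task ID is a placeholder):
import pandas as pd
from clearml import Task

task = Task.get_task(task_id="<your-task-id>")
scalars = task.get_reported_scalars()  # {title: {series: {"x": [...], "y": [...]}}}

rows = []
for title, series_dict in scalars.items():
    for series, values in series_dict.items():
        for x, y in zip(values["x"], values["y"]):
            rows.append({"metric": title, "series": series, "iteration": x, "value": y})

df = pd.DataFrame(rows)
print(df.head())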
Hi @<1691983266761936896:profile|AstonishingOx62> , I'm not sure I understand what you're trying to do. You have some Python code unrelated to ClearML. Does it run without issues? Did you afterwards add Task.init() to that code?
You can export it in the same shell you run the agent in and that should work. For example:
export FOO=bar
clearml-agent daemon ...
Hi @<1729309137944186880:profile|GrittyBee73> , models are unique objects in the system, so each one of them has a unique ID. By default they will have the same name. However, you can add versioning on top in any way that you want: either add tags, or add metadata to the models and then add custom columns based on that metadata so you can filter by version.
What do you think?
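As a sketch, tagging a model with a version at logging time could look roughly like this (the project, task, model name, and tag are just examples):
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="train")
# attach an output model to the task and tag it with a version label
output_model = OutputModel(task=task, name="my-model", tags=["v1.2.0"])
You can then filter by those tags in the models table, or add custom columns based on the model metadata.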
Hi @<1603198163143888896:profile|LonelyKangaroo55> , you can change the value of files_server in your clearml.conf to control it as well.
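For example, something along these lines in clearml.conf (the URL here is just a placeholder for your own file server / storage endpoint):
api {
    files_server: "https://files.my-clearml-server.com:8081"
}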
Hi @<1625666182751195136:profile|MysteriousParrot48> , I'm afraid this looks like a pure Elasticsearch issue; I'd suggest checking the Elasticsearch forums for help with this
Hi @<1581454875005292544:profile|SuccessfulOtter28> , by "using the most metrics" do you mean how much metrics storage space it takes up?
Archiving doesn't remove anything, but once archived, you can delete experiments to free up space
Hi @<1709015393701466112:profile|ScatteredPeacock14> , please open a GitHub issue for this to follow up on!
Hi @<1739455977599537152:profile|PoisedSnake58> , in the log you have the location of the cloned repo printed out.
For CLEARML_AGENT_EXTRA_PYTHON_PATH you need to provide it with a path
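For example (the path is just a hypothetical placeholder):
export CLEARML_AGENT_EXTRA_PYTHON_PATH=/path/to/extra/modules
clearml-agent daemon --queue default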
Hi @<1749965229388730368:profile|UnevenDeer21> , I think this is what you're looking for
None
Hi @<1749965229388730368:profile|UnevenDeer21> , can you add the log of the job that failed?
Also, note that you can set these arguments from the webUI on the task level itself as well: the Execution tab, then the Container section
Hi @<1717350332247314432:profile|WittySeal70> , where are the debug samples stored? Have you recently moved the server?
Hi @<1717350310768283648:profile|SplendidFlamingo62> , you can basically export the same plots from a model and do that in the report. Or am I missing something?
Hi BoredHedgehog47 , you need to add
from clearml import Task
task = Task.init(project_name='examples', task_name='hello world')
to your code and run it once after you've run clearml-init
Regarding connect_configuration(), reading through the docs I see that this method needs to be called before reading the config file:
https://clear.ml/docs/latest/docs/references/sdk/task#connect_configuration
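A minimal sketch of the order of operations (the file name and project/task names are just examples):
from clearml import Task
import yaml

task = Task.init(project_name="examples", task_name="config example")

# connect the configuration file first - locally this returns the original path,
# when executed remotely it returns a local copy of the stored configuration
config_path = task.connect_configuration("config.yaml", name="config")

with open(config_path) as f:
    config = yaml.safe_load(f)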
The webUI uses the API to show everything, so I would suggest opening the developer tools (F12), looking at what is sent by the UI when you navigate the different sections of the experiments, and using that as a baseline
Hi @<1710827340621156352:profile|HungryFrog27> , you can do that. You would want to export everything you need using the API and then recreate and populate all the tasks using the API as well.
See here - None
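If it helps, the SDK also has export/import helpers; roughly along these lines (a sketch - the task ID is a placeholder):
from clearml import Task

source = Task.get_task(task_id="<source-task-id>")
task_data = source.export_task()        # full task definition as a dictionary

new_task = Task.import_task(task_data)  # recreate it, e.g. against another server configured in clearml.conf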
Can you try with the latest version of the server?
I don't think there is such an option currently but it does make sense. Please open a GitHub feature request for this 🙂