Hi KindChimpanzee37 I was more asking about the general idea of making these settings task-specific, but thank you for the suggestion anyway; I will definitely apply it.
Maybe this opens up another question, which is more about how clearml-agent is supposed to be used. The "pure" way would be to make the docker image provide everything, so that clearml-agent does no setup at all.
What I currently do instead is let the docker image provide all system dependencies and let clearml-agent set up all the python dependencies. This allows me to reuse one docker image across different experiments. However, then it would make sense to have as many configs as possib...
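That split can be sketched in clearml.conf roughly like this (the image name below is a placeholder, not a real one):

```
agent {
  # system dependencies come baked into the docker image ...
  default_docker {
    image: "my-org/base-image:latest"   # hypothetical image
  }
  # ... while python dependencies are resolved per task by the agent
  package_manager {
    type: pip
  }
}
```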
Also, clearml-agent at version 1.5 does not look for nightly builds at the correct indexes, even if torch_nightly is set to true in clearml.conf:
Looking in indexes:
https://pypi.org/simple ,
https://download.pytorch.org/whl/cu117/
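For reference, the knobs involved in clearml.conf look roughly like this; the nightly index URL below is my assumption of what the agent should be adding, not what it actually does:

```
agent {
  package_manager {
    torch_nightly: true
    # one would expect the nightly index to be searched as well, e.g.:
    extra_index_url: ["https://download.pytorch.org/whl/nightly/cu117"]
  }
}
```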
However, I have not yet found a flexible solution other than ssh-agent forwarding.
But it is not related to network speed, rather to clearml. A simple file transfer test gives me approximately 1 Gbit/s between the server and the agent, which is what I would expect from the 1 Gbit/s network.
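For completeness, the back-of-the-envelope conversion I used for the transfer test (a minimal sketch, plain Python):

```python
def throughput_gbit_per_s(num_bytes: float, seconds: float) -> float:
    """Convert a measured file transfer into Gbit/s (SI units)."""
    return num_bytes * 8 / seconds / 1e9

# 12.5 GB copied in 100 s is exactly the 1 Gbit/s line rate
print(throughput_gbit_per_s(12.5e9, 100.0))  # → 1.0
```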
AgitatedDove14 fyi I think this is the issue I have: https://stackoverflow.com/a/65526944/3038183
I am currently on the Open Source version, so no Vault. The environment variables are not meant to be used on a per-task basis, right?
Depends on how you start the task, afaik. I think clearml-task uses requirements.txt by default; otherwise clearml will parse your files' imports for dependencies, or, if you changed it in clearml.conf, it will use your conda/pip environment to generate the requirements.
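As far as I know, the clearml.conf switch that toggles between import-parsing and a full environment freeze is this one (a sketch; defaults may differ between versions):

```
sdk {
  development {
    # when true, log the full pip freeze of the local environment
    # instead of only the packages parsed from the script's imports
    detect_with_pip_freeze: false
  }
}
```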
Wait, nvm. I just tried it again and now it worked.
btw: Could you check whether agent.package_manager.system_site_packages is true or false in your config and in the summary that the agent gives before execution? I start my agent in --foreground mode for debugging and the config clearly shows false, but in the summary that the agent gives before the task is executed, it shows true.
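For reference, this is the setting in question in clearml.conf (a sketch; the value here is what the agent's pre-execution summary should echo back):

```
agent {
  package_manager {
    # when true, the task's virtualenv inherits packages
    # already installed in the system / docker python
    system_site_packages: false
  }
}
```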
When I passed specific arguments (for example --steps) it ignored them...
I am not sure what you mean by this. It should not ignore anything.
What do you mean by "Why not add the extra_index_url to the installed packages part of the script"?
Tested with clearml-agent 1.0.1rc4/1.2.2 and clearml 1.3.2
Alright, thanks. Would be a nice feature 🙂
When I change the owner and the group of the files to root, it works.
clearml will register conda packages that cannot be installed by pip if clearml-agent is configured to use pip. So although it is nice that a complete package list is tracked, it makes rerunning the experiment cumbersome.
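A workaround sketch, assuming conda is available on the agent machine: configure the agent's package manager to match the environment the task was created from, so the registered conda packages can actually be resolved:

```
agent {
  package_manager {
    # "pip" agents cannot install conda-only packages;
    # switch to conda when tasks were created from a conda env
    type: conda
  }
}
```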
Yep, I will add this as an issue. Btw: should I post these kinds of questions as issues, or do they fit better here?
How can I get the agent log?
Thank you! I agree with CostlyOstrich36; that is what I meant by a false sense of security 🙂
Thank you SuccessfulKoala55, so actually only the file-server needs to be secured.
That seems to be the case. After parsing the args I run task = Task.init(...) and then task.execute_remotely(queue_name=args.enqueue, clone=False, exit_process=True).
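The pattern looks roughly like this (a sketch; the project/task names and the --steps argument are made up for illustration, and the actual training loop is elided):

```python
import argparse


def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--enqueue", default=None,
                        help="queue name; when set, re-launch the task remotely")
    parser.add_argument("--steps", type=int, default=1000)
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.enqueue:
        # only touch clearml when a remote run is actually requested
        from clearml import Task
        task = Task.init(project_name="examples", task_name="remote-run")
        # exit_process=True stops the local process once the task is enqueued
        task.execute_remotely(queue_name=args.enqueue, clone=False,
                              exit_process=True)
    # ... training loop using args.steps would go here ...
    return args


if __name__ == "__main__":
    main()
```

Note that the args are parsed before Task.init is called, matching the order described above.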
Well, I guess "no hurdles" vs. safety is inherently not solvable. I am all for hurdles if it is clear how to overcome them. And in my opinion, referring to clearml-init is something that makes sense from both a developer and a user perspective.
Anyway, from my Google search it seems that this is not intuitive to fix.
Is there any progress on this: https://github.com/allegroai/clearml-agent/issues/45 ? This works on all my machines 🙂
Python 3.8.8, clearml 1.0.2