Hi ReassuredTiger98!
I'm not sure the above will work. Maybe I can help in another way, though: when you want to set agent.package_manager.system_site_packages = true, does that mean you have a docker container with some of the correct packages already installed? If you use a docker container, there is no real need to create a virtualenv anyway, and you can use the env var CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 to just install all packages into the root environment.
Because every task gets its own clean docker container, there is no problem with using the root env. The nice thing is that this way you get the system packages plus any other packages that are installed by the Task.
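As an aside, if a Task needs extra packages on top of what the image provides, one way to declare them is Task.add_requirements before Task.init. This is only a rough sketch, not something from your setup; the package and project/task names are placeholders:

from clearml import Task

# Declare an extra pip requirement for this task; when the agent runs it,
# the package is installed on top of whatever the docker image provides.
Task.add_requirements("pandas")  # placeholder package name

task = Task.init(project_name="examples", task_name="root-env-demo")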
Was that the outcome you meant? If so, please let me know whether it works for you once you have tested it 🙂 In the meantime we can think about your idea of making agent.package_manager.system_site_packages task-specific.
I am currently on the Open Source version, so no Vault. The environment variables are not meant to be used on a per-task basis, right?
No, this should work even with this one. I'll double-check whether I'm remembering it correctly, but I thought you should be able to start a task after loading your own configuration object, in which you can set agent.package_manager.system_site_packages = true.
Okay, no worries. I will check first. Thanks for helping!
It should, or you might need to nest the objects.
Edit: I asked, and it won't work; there's a difference in configs that I mixed up.
Maybe this is something that is only possible with the vault of the enterprise version?
I mean, if I do
CLEARML_DOCKER_IMAGE=my_image clearml-task something something
it will not work, right?
Or you can just load a config file or object: https://clear.ml/docs/latest/docs/references/sdk/task/#connect_configuration
So if I understand correctly, something like this should work?
task = Task.init(...)
task.connect_configuration({"agent.package_manager.system_site_packages": False})
task.execute_remotely(queue_name, clone=False, exit_process=True)
Hi KindChimpanzee37, I was asking more about the general idea of making these settings task-specific, but thank you for the suggestion anyway; I will definitely apply it.
I think you can set this code-wise as well: https://clear.ml/docs/latest/docs/references/sdk/task#taskforce_requirements_env_freeze
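If it helps, a minimal sketch of how that call is typically used (project/task names are placeholders); it has to be called before Task.init:

from clearml import Task

# Freeze the currently installed packages as this task's requirements
# instead of relying on import analysis.
Task.force_requirements_env_freeze()

task = Task.init(project_name="examples", task_name="frozen-requirements")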
ReassuredTiger98 you can set different parameters per task:
https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk#configuration
Or you can give it a configuration object: https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk#configuration-objects
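Roughly along these lines (parameter names and values are just placeholders, not specific to your case):

from clearml import Task

task = Task.init(project_name="examples", task_name="per-task-settings")

# Per-task parameters; these show up in the UI and can be overridden per run
params = {"batch_size": 32, "learning_rate": 1e-3}
params = task.connect(params)

# Or attach a whole configuration object to the task
config = {"some_section": {"some_option": True}}
task.connect_configuration(config, name="my_config")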
Hi TimelyMouse69
Thank you for answering, but I do not think these methods allow me to modify anything that is set in clearml.conf. Rather, they just do logging.
ReassuredTiger98 anything in the configuration file can be overridden 🙂
https://clear.ml/docs/latest/docs/configs/configuring_clearml
Maybe this opens up another question, which is more about how clearml-agent is supposed to be used. The "pure" way would be to make the docker image provide everything, so that clearml-agent does no setup at all.
What I currently do instead is let the docker image provide all system dependencies and let clearml-agent set up all the Python dependencies. This allows me to reuse a docker image across different experiments. However, then it would make sense to have as many configs as possible be task-specific or task-overridable.
Is this not something completely different?
This will just change the way the local repository is analyzed, but nothing about the agent.