🤔 Hmm, yes, I suppose I can do that.
Ah, yes, I found the Dockerfile in the clearml-agent repository already. Should be doable!
Thanks for the suggestion!
Hi StrongHorse8 , you want to run the agent inside a container or the agent to run your task in docker mode?
Hi StrongHorse8,
Yes, each clearml-agent can listen to a different queue and use a specific GPU. You can see all the use cases and examples at this link: https://clear.ml/docs/latest/docs/clearml_agent/#allocating-resources
I guess you are using an on-prem server and not a cloud one (AWS, for example)
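As a quick sketch of that allocation pattern (the queue names and GPU indices here are just placeholders, not from the thread), two agents on the same machine could look like this:

```shell
# Agent 1: listens on "quad_gpu_queue" and is given GPUs 0-3
clearml-agent daemon --detached --queue quad_gpu_queue --gpus 0,1,2,3

# Agent 2: listens on "single_gpu_queue" and is given GPU 4 only
clearml-agent daemon --detached --queue single_gpu_queue --gpus 4
```

Each daemon pulls only from its own queue, so enqueueing a task to `single_gpu_queue` routes it to the agent holding GPU 4.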
For future reference, there's actually an easier way.
The entrypoint of the Docker container accepts CLEARML_AGENT_EXTRA_ARGS. So adding CLEARML_AGENT_EXTRA_ARGS="--queue new_queue_name --create-queue" to your environment lets it work with the default clearml-agent image.
Unfortunately, it's nowhere to be found in the documentation, but you can see it in the repository: https://github.com/allegroai/clearml-agent/blob/master/docker/agent/entrypoint.sh
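Putting that together, a hedged example of launching the stock image this way (the image tag and the credential values are assumptions — substitute your own server URL and keys):

```shell
# Run the default clearml-agent image; the entrypoint picks up
# CLEARML_AGENT_EXTRA_ARGS and appends it to the daemon command,
# so the agent creates and listens on "new_queue_name".
docker run -d \
  -e CLEARML_API_HOST="https://api.clear.ml" \
  -e CLEARML_API_ACCESS_KEY="<your_access_key>" \
  -e CLEARML_API_SECRET_KEY="<your_secret_key>" \
  -e CLEARML_AGENT_EXTRA_ARGS="--queue new_queue_name --create-queue" \
  allegroai/clearml-agent:latest
```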
👍 great, so if you have an image with clearml agent, it should solve it 😀
Hi Alon,
Thanks! I know that already. I'm looking more for a way to spin up the Docker containers automatically, without having to log into each machine manually and start a clearml-agent from there.
Sorry for the confusion.
Hi TimelyPenguin76
Both. The agent has to run inside a container and it will spin up sibling containers to run the tasks.
Thanks StrongHorse8
Where do you think would be a good place to put a more advanced setup? Maybe we should add an entry for DevOps? Wdyt?
can you build your own docker image with clearml-agent installed in it?
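For reference, a minimal Dockerfile sketch for such an image (the base image, Python version, and queue name are placeholders, not anything confirmed in the thread):

```dockerfile
# Hypothetical minimal agent image — adjust base image and queue to taste
FROM python:3.9-slim
RUN pip install --no-cache-dir clearml-agent
ENTRYPOINT ["clearml-agent", "daemon", "--queue", "default"]
```

For the agent-in-a-container setup described above, where the agent spins up sibling containers for tasks, the host's Docker socket would also need to be mounted at run time (e.g. `-v /var/run/docker.sock:/var/run/docker.sock`).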