Hi Alon,
Thanks! I know that already. I'm more looking for a way to spin up the Docker containers automatically, without having to manually log into each machine and start a clearml-agent from there.
Sorry for the confusion.
🤔 Hmm, yes, I suppose I can do that.
Ah, yes, I found the Dockerfile in the clearml-agent repo already. Should be doable!
Thanks for the suggestion!
Hi TimelyPenguin76
Both. The agent has to run inside a container and it will spin up sibling containers to run the tasks.
For future reference, there's actually an easier way.
The entrypoint of the Docker container accepts CLEARML_AGENT_EXTRA_ARGS. So adding CLEARML_AGENT_EXTRA_ARGS="--queue new_queue_name --create-queue" to your environment lets it work with the default clearml-agent image.
Unfortunately, it's nowhere to be found in the documentation, but you can see it in the repository: https://github.com/allegroai/clearml-agent/blob/master/docker/agent/entrypoint.sh
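For anyone who wants to copy-paste, something along these lines should work. I'm assuming the stock allegroai/clearml-agent image built from that repo, plus a placeholder server address and keys; the socket mount is what lets the agent start sibling task containers on the host:
# placeholder credentials/server address; adjust for your setup
docker run -d --name clearml-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CLEARML_API_HOST=https://api.example.com \
  -e CLEARML_API_ACCESS_KEY=<your_access_key> \
  -e CLEARML_API_SECRET_KEY=<your_secret_key> \
  -e CLEARML_AGENT_EXTRA_ARGS="--queue new_queue_name --create-queue" \
  allegroai/clearml-agent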
can you build your own docker image with clearml-agent installed in it?
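If you go that route, a minimal sketch could look something like this (base image, tag names, and the queue name are just assumptions, adjust for your setup):
# write a small Dockerfile that installs clearml-agent, then build it
cat > Dockerfile <<'EOF'
FROM python:3.10-slim
RUN pip install --no-cache-dir clearml-agent
# listens on the "default" queue; for docker mode you'd also need the
# docker CLI inside the image and the host's docker socket mounted at run time
ENTRYPOINT ["clearml-agent", "daemon", "--queue", "default"]
EOF
docker build -t my-registry/clearml-agent:custom .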
Thanks StrongHorse8
Where do you think would be a good place to put a more advanced setup? Maybe we should add an entry for DevOps? Wdyt?
Hi StrongHorse8, do you want to run the agent inside a container, or do you want the agent to run your tasks in docker mode?
Hi StrongHorse8,
Yes, each clearml-agent can listen to a different queue and use a specific GPU. You can view all the use cases and examples here: https://clear.ml/docs/latest/docs/clearml_agent/#allocating-resources
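If it helps, the per-GPU split usually looks something like this (queue names here are made up, and I'm assuming you launch the agents directly on the machine):
# one agent per GPU, each pulling work from its own queue
clearml-agent daemon --queue gpu0_queue --gpus 0 --docker --detached
clearml-agent daemon --queue gpu1_queue --gpus 1 --docker --detached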
👍 Great, so if you have an image with clearml-agent in it, that should solve it 😀
I guess you are using an on-prem server and not a cloud one (AWS, for example).