Thank you for your response. It works. I want to run several workers simultaneously on the same GPU because I have to train several relatively small and simple neural networks. It would be faster to train several of them at the same time on the same GPU rather than doing it sequentially.
ExcitedSeaurchin87, I think you can differentiate them by using different worker names. Try setting the following environment variable when running the command: CLEARML_WORKER_NAME
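For example (just a sketch based on the daemon command you posted, with placeholder worker and queue names), launching two agents on the same GPU with distinct names could look like:

CLEARML_WORKER_NAME=worker-gpu0-a python -m clearml_agent daemon --queue queue_a --docker --gpus 0 --detached
CLEARML_WORKER_NAME=worker-gpu0-b python -m clearml_agent daemon --queue queue_b --docker --gpus 0 --detached

Each daemon then registers under its own worker name, so the two agents don't clash even though they both use GPU 0.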
I wonder, why do you want to run multiple workers on the same GPU?
Hi ExcitedSeaurchin87,
How are you trying to run the agents? Also, are you trying to run multiple agents on the same GPU?
Yes, I would like to run several agents on the same GPU. I use the command: python -m clearml_agent daemon --queue default queue_name --docker --gpus 0 --detached
Makes perfect sense, just be careful not to run out of memory or it'll crash everything 😛
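One simple way to keep an eye on that (just a suggestion, assuming the machine has the standard NVIDIA tools installed) is to watch per-process GPU memory while the workers train:

watch -n 1 nvidia-smi

That lets you spot when the combined memory of the concurrent training jobs is getting close to the card's limit before anything crashes.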