If you have 2 agents serving the same queue and then send 2 tasks to that queue, each agent should take one task.
But if you enqueue one task, wait for it to finish, and then enqueue the next, it is random which agent takes the task. It can be the same one that ran the previous task.
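For reference, a minimal sketch of the first scenario, assuming the clearml SDK is configured and a template task already exists (the project name, task name, and queue name below are placeholders):

from clearml import Task

# placeholder template task to clone (project/task names are assumptions)
template = Task.get_task(project_name="examples", task_name="my_training_task")

# enqueue two copies at once; with 2 agents serving the "test" queue,
# each agent should pick up one of them
for i in range(2):
    clone = Task.clone(source_task=template, name=f"my_training_task #{i}")
    Task.enqueue(clone, queue_name="test")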
Are you saying that you have 1 agent running a task and 1 agent sitting idle while there is a task waiting in the queue that no one is processing??
I could not find any documentation for this.
Multiple GPUs on the same node are working, but GPUs on different nodes are not balancing the workload.
Please clear my doubt. I actually tried it: I created a queue called test and attached 2 GPUs from different machines, but the workload is going to only one GPU. This is my problem now; SLURM is not happening.
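One quick sanity check for this setup (a sketch, assuming the clearml package is installed and pointed at your server): list the registered workers and confirm that an agent from each machine actually shows up and is serving the test queue.

from clearml.backend_api.session.client import APIClient

# list every worker (agent) currently registered with the ClearML server;
# you should see one entry per machine that is serving the "test" queue
client = APIClient()
for worker in client.workers.get_all():
    print(worker.id)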
Hi @GorgeousPuppy74, ClearML does support running with multiple GPUs.
How will it support that? When I have 20 GB + 20 GB on different machines, how can I run a 40 GB workload?