feature request:
We have several servers with multiple GPUs, and at the moment we have to manually check which GPU has enough free memory before queuing each experiment into the right queue. It would be cool if we could set a required GPU memory parameter for each experiment, and ClearML would then queue it according to the currently available resources.
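For context, the manual flow described above looks roughly like this sketch (not part of ClearML itself; the per-GPU queue names, the task id placeholder, and the 10 GiB threshold are assumptions):
```python
import pynvml
from clearml import Task

REQUIRED_FREE_BYTES = 10 * 1024 ** 3  # hypothetical per-experiment memory requirement

# Find the first GPU with enough free memory using NVML
pynvml.nvmlInit()
try:
    chosen_gpu = None
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        if mem.free >= REQUIRED_FREE_BYTES:
            chosen_gpu = i
            break
finally:
    pynvml.nvmlShutdown()

if chosen_gpu is None:
    raise RuntimeError("no GPU currently has enough free memory")

# Enqueue the (already created/cloned) task into the per-GPU queue,
# assuming queues named "gpu0", "gpu1", ... each served by one agent/GPU
task = Task.get_task(task_id="<your task id>")
Task.enqueue(task, queue_name=f"gpu{chosen_gpu}")
```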
Hi DilapidatedDucks58 ,
ClearML supports dynamic GPU allocation as part of the paid version - https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation
can this help?