On a related line, but more complicated: how can we ask the autoscaler to queue, say, N jobs on an N-GPU machine, please? For example, on AWS, NVIDIA A100 GPUs are only available on instances with 8x A100, which is overkill for a single-GPU job, so might a…
My understanding may be off here. Say I have a single EC2 instance: can that instance only handle one task at a time? Or can I start multiple clearml-agent processes on it and have one task per agent? And if that's the case, can multiple agents on the same EC2 instance listen to the same queue, e.g. default, or would this only work if they listened to different queues?
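To make it concrete, here is roughly what I have in mind: a sketch that assumes clearml-agent's daemon mode with its --gpus and --queue options, run manually rather than via the autoscaler (the queue name "default" is just an example). Each agent is pinned to one GPU and all of them pull from the same queue:

```bash
# Sketch: one clearml-agent per GPU on an 8x A100 instance, all
# listening to the same "default" queue. Each agent only sees the GPU
# it was given, so several single-GPU tasks could run concurrently.
clearml-agent daemon --detached --gpus 0 --queue default
clearml-agent daemon --detached --gpus 1 --queue default
# ... and so on for the remaining GPUs ...
clearml-agent daemon --detached --gpus 7 --queue default
```

Whether the autoscaler can be told to set up something like this automatically is the part I'm unsure about.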