Hi GleamingGrasshopper63
How well can the MLOps component handle job queuing on a multi-GPU server?
This is fully supported 🙂
You can think of queues as a way to abstract resources for users (you can do more than that, but let's start simple)
Basically you can create a queue per type of GPU. For example, a list of queues could be: on_prem_1gpu, on_prem_2gpus, ..., ec2_t4, ec2_v100
Then when you spin up the agents, you attach each agent to the "correct" queue for its machine type.
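Assuming this is ClearML (inferred from the queue/agent terminology; the source doesn't name the tool), spinning up agents per machine type might look roughly like this. The queue names come from the example above; treat the exact invocations as a sketch:

```shell
# On a single-GPU on-prem box: serve the 1-GPU queue, pinned to GPU 0
clearml-agent daemon --queue on_prem_1gpu --gpus 0 --detached

# On a multi-GPU on-prem box: one agent per queue, each with its own GPUs
clearml-agent daemon --queue on_prem_1gpu --gpus 0 --detached
clearml-agent daemon --queue on_prem_2gpus --gpus 1,2 --detached

# On a cloud V100 instance: attach to the matching cloud queue
clearml-agent daemon --queue ec2_v100 --gpus 0 --detached
```

Jobs enqueued to, say, on_prem_2gpus will then only ever be picked up by an agent that actually has two GPUs to give them.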
Interested in the differences between Enterprise and community. Who would I talk with?
Well, I'm not sure I'm the guy for that, but I think the gist is: enterprise (or paid) adds security / permissions, and expands the data management layer (basically adding a query layer on top of the datasets, just like a DB, only with versioning and links to files). Obviously hosting, support etc., but I guess that's a given