Hi, I have a question about queue management of ClearML Agents. I am still a beginner to ClearML and still discovering the potential it has, and so far it has amazed me with its versatile features

Hi, I have a question about queue management of ClearML Agents. I am still a beginner to ClearML and still discovering the potential it has, and so far it has amazed me with its versatile features 😄. I currently have an agent machine with 4 GPUs and want to configure queues with quad, dual, and single GPUs so I can choose depending on my workload. I might be wrong, but it seems like ClearML does not monitor GPU pressure when deploying a task to a worker, but rather relies only on its configured queues. Is it possible to configure the queues so that when the quad-GPU queue is being used for a task, the other queues wait because the resource is busy (and the same for the dual-GPU queue)?

Posted 2 years ago

Answers 2


Hi UpsetBlackbird87

I might be wrong, but it seems like ClearML does not monitor GPU pressure when deploying a task to a worker, but rather relies only on its configured queues.

This is mostly accurate. The way the agent works is that you allocate a resource (specifically a GPU) to each agent, then set the queues (plural) it listens to (by default in priority order). Each agent then independently pulls jobs and runs them on its allocated GPU.
If I understand you correctly, you want multiple agents on the same GPUs?
There is no limit on resources, so you can have multiple agents "sharing" the same resource, but you have to make sure that two Tasks are never launched on it simultaneously.
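As a sketch of that setup (the queue names here are made up for illustration, not built-in), you could run several agents on the same machine, each pinned to a GPU set and listening to its own queue:

```shell
# One agent per GPU grouping; each listens to its own queue.
# Queue names (quad_gpu, dual_gpu, single_gpu) are illustrative.
clearml-agent daemon --detached --gpus 0,1,2,3 --queue quad_gpu
clearml-agent daemon --detached --gpus 0,1     --queue dual_gpu
clearml-agent daemon --detached --gpus 2       --queue single_gpu
```

Note that with this layout the GPU sets overlap, so nothing stops the quad-GPU agent and the dual-GPU agent from running Tasks on GPUs 0-1 at the same time; coordinating that is exactly what the question is about.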

Is it possible to configure the queues so that when a quad-GPU queue is being used for a task that other queues wait as the resource is busy (same goes for the dual-GPU queue)?

Actually, this is fully supported; the sad news is that it is only available in the paid tier 😞. It is usually considered an "enterprise" feature, for customers with DGX machines etc.
That said, you can always move Tasks between queues and manually stop them, which means that unless you have a huge load you can always switch manually, if that makes sense.
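A minimal sketch of that manual shuffling via the clearml SDK (the task ID and queue name below are placeholders, and this requires a running ClearML server):

```python
from clearml import Task

# Placeholder task ID for illustration
task = Task.get_task(task_id="<your-task-id>")

# Pull the Task out of whatever queue it is currently waiting in...
Task.dequeue(task)
# ...and re-enqueue it on a different queue (name is illustrative)
Task.enqueue(task, queue_name="single_gpu")
```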

Posted 2 years ago

Thanks for your response, Martin.

Posted 2 years ago