Happy Friday everyone! We have a new repo release we would love to get your feedback on

Happy Friday everyone!
We have a new repo release we would love to get your feedback on 🚀
🎉 Finally, easy FRACTIONAL GPU on any NVIDIA GPU 🎊
Run our nvidia-cuda flavor containers and get a driver-level memory limit!
That means multiple containers on the same GPU will not OOM each other!
Let us know what you think.
You can test it now with ✨

docker run -it --rm --gpus 0 --pid=host clearml/fractional-gpu:u22-cu12.3-2gb nvidia-smi

Notice that

nvidia-smi

inside the container reports a total of only 2GB instead of your GPU's full memory.
The full list of containers / memory limits is in the
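One way to see the isolation for yourself (a sketch: the container names and the idle `sleep` workload are made up for illustration, and this assumes the NVIDIA container toolkit is installed) is to start two limited containers on the same device:

```shell
# Two independent 2GB-limited containers sharing physical GPU 0.
# Each one sees only its own 2GB slice, so a large allocation in one
# cannot OOM the other.
docker run -d --rm --gpus 0 --pid=host --name frac-a \
  clearml/fractional-gpu:u22-cu12.3-2gb sleep infinity
docker run -d --rm --gpus 0 --pid=host --name frac-b \
  clearml/fractional-gpu:u22-cu12.3-2gb sleep infinity

# Both should report a ~2GB total:
docker exec frac-a nvidia-smi
docker exec frac-b nvidia-smi

docker stop frac-a frac-b
```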

  
  
Posted one month ago

Answers 10


@<1524922424720625664:profile|TartLeopard58> @<1545216070686609408:profile|EnthusiasticCow4>
Notice that when you are spinning up multiple agents on the same GPU, the Tasks should request the "correct" fractional GPU container, i.e. if they pick a "regular" container there will be no memory limit.
So something like

CLEARML_WORKER_NAME=host-gpu0a clearml-agent daemon --gpus 0 --docker clearml/fractional-gpu:u22-cu12.3-2gb
CLEARML_WORKER_NAME=host-gpu0b clearml-agent daemon --gpus 0 --docker clearml/fractional-gpu:u22-cu12.3-2gb

Also remember to add --pid=host to the extra_docker_arguments setting in your conf file.
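In clearml.conf that setting looks something like the fragment below (a sketch of the agent section; only the extra_docker_arguments line is the point here):

```
agent {
    # passed verbatim to `docker run`; --pid=host is required for the
    # driver-level memory limit inside the fractional-gpu containers
    extra_docker_arguments: ["--pid=host"]
}
```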

  
  

@<1545216070686609408:profile|EnthusiasticCow4>

Is there currently a way to bind the same GPU to multiple queues? I believe the agent complains last time I tried (which was a bit ago)

Run multiple agents on the same GPU:

CLEARML_WORKER_NAME=host-gpu0a clearml-agent daemon --gpus 0
CLEARML_WORKER_NAME=host-gpu0b clearml-agent daemon --gpus 0
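Each daemon can then listen on its own queue, so two queues effectively share one GPU (the queue names below are hypothetical examples, not from the original post):

```shell
# One agent per queue, both pinned to physical GPU 0
CLEARML_WORKER_NAME=host-gpu0a clearml-agent daemon --queue gpu0-a --gpus 0 --detached
CLEARML_WORKER_NAME=host-gpu0b clearml-agent daemon --queue gpu0-b --gpus 0 --detached
```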
  
  

How does it work with k8s?

You need to install the clearml k8s glue, and then on the Task request the container. Notice you need to preconfigure the glue with the correct Job YAML.
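As a rough sketch of the kind of pod template you would preconfigure the glue with (the image tag is from the announcement above; the pod name, resource request, and everything else are assumptions about your cluster, not the official template):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clearml-fractional-gpu-task   # hypothetical name
spec:
  hostPID: true                       # k8s equivalent of docker's --pid=host
  containers:
    - name: task
      image: clearml/fractional-gpu:u22-cu12.3-2gb
      resources:
        limits:
          nvidia.com/gpu: 1           # the 2GB cap is enforced inside the image
```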

  
  

How does it work with k8s? how can I request the two pods to sit on the same gpu?

  
  

AMAZING!

  
  

Is there currently a way to bind the same GPU to multiple queues? I believe the agent complains last time I tried (which was a bit ago).

  
  

That's great! I look forward to trying this out.

  
  

is it in the OSS version too?

  
  

@<1535069219354316800:profile|PerplexedRaccoon19>

is it in the OSS version too?

Yep, free of charge ❤

  
  

I’m also curious if it’s available to bind the same GPU to multiple queues.

  
  