Answered

Hello, I have a question: is it possible to create multiple train-agents per GPU? I see cases of multiple GPUs per agent on the GitHub page, but I'm wondering if it's possible to have multiple agents share a GPU to better utilize my GPU resources. Not sure if I set things up wrong or something, but when I run the hyperparameter_search Jupyter notebook in frameworks/pytorch/notebooks/image, everything runs sequentially, waiting for the GPU to free up. It would be great if I could run multiple instances on a single GPU. Thanks!

  
  
Posted 4 years ago

Answers 3


Thanks, I've tried this out and it seems to work. I guess I just have to make sure that the total memory usage of all parallel processes is not higher than my GPU's memory.

  
  
Posted 4 years ago

Hi CooperativeFly2

is it possible to create multiple train-agent per gpu

Yes you can. That said, memory cannot actually be shared between GPU processes (GPU time, obviously, is shared), so you have to be careful with the Tasks that are actually executed in parallel.

For instance:
TRAINS_WORKER_NAME=host_a trains-agent daemon --gpus 0 --queue default
TRAINS_WORKER_NAME=host_b trains-agent daemon --gpus 0 --queue default

  
  
Posted 4 years ago

I guess I just have to make sure that total memory usage of all parallel processes are not higher than my gpu's memory.

Yep, unfortunately I'm not aware of any way to do that automatically 🙂
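If you want a manual pre-check before launching another experiment on the same GPU, one option is to query free memory with nvidia-smi and gate on a threshold yourself. A rough sketch (the `nvidia-smi` query flags are real; the helper names and the threshold logic are just my own illustration, not anything built into trains-agent):

```python
import subprocess

def parse_free_mib(nvidia_smi_output, gpu_index=0):
    # nvidia-smi with the flags below prints one line per GPU,
    # each line being the free memory in MiB, e.g. "10240"
    return int(nvidia_smi_output.strip().splitlines()[gpu_index])

def free_gpu_memory_mib(gpu_index=0):
    # Query the driver; requires nvidia-smi on PATH
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_free_mib(out, gpu_index)

def enough_room(required_mib, gpu_index=0):
    # Crude go/no-go check before enqueueing another Task on this GPU
    return free_gpu_memory_mib(gpu_index) >= required_mib
```

Note this is only a point-in-time check: a process that is still allocating can blow past it a moment later, so leave yourself headroom.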

  
  
Posted 4 years ago