Answered
We have a use case where an experiment consists of multiple Docker containers. For example, one container runs on a CPU machine, preprocesses images, and puts them into a queue. The second one (the main one) resides on a GPU machine, reads tensors and targets from the queue, and updates the weights of the network. One workaround would be to create a script that creates two linked experiments for these jobs and puts them into different ClearML queues. Are there better ways to achieve the same goal?

  
  
Posted 2 years ago

Answers 6


After re-reading your question, it might be difficult to have cross-process communication, though. So if you want the preprocessing to happen at the same time as the training, with the training pulling data from the preprocessing step on the fly, that might be more difficult. Is this your use case?

  
  
Posted 2 years ago

As long as your clearml-agents have access to the Redis instance, it should work! Cool use case though, interested to see how well it works 🙂

  
  
Posted 2 years ago

Yeah, we've used pipelines in other scenarios. Might be a good fit here. Thanks!

  
  
Posted 2 years ago

Pipelines! 😄

ClearML allows you to create pipelines, with each step either being created from code or from a pre-existing task. Each task, by the way, can be assigned a custom Docker container that it should run inside of, so it should fit nicely with your workflow!

YouTube videos:
https://www.youtube.com/watch?v=prZ_eiv_y3c
https://www.youtube.com/watch?v=UVBk337xzZo

Relevant Documentation:
https://clear.ml/docs/latest/docs/pipelines/

Custom Docker container per task:
https://clear.ml/docs/latest/docs/references/sdk/task#set_base_docker

You can also override the Docker container a step should use via an override in the pipeline controller.
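As a rough sketch of what that could look like for your two-container setup (project, task, and queue names here are hypothetical, and this assumes a reachable ClearML server plus a CPU agent and a GPU agent listening on the respective queues):

```python
from clearml import PipelineController  # requires `pip install clearml` and a configured server

# Hypothetical project/task/queue names for illustration only
pipe = PipelineController(
    name="preprocess-and-train",
    project="examples",
    version="1.0.0",
)

# Step 1: CPU-bound preprocessing, cloned from a pre-existing task
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="image-preprocessing",
    execution_queue="cpu-queue",
)

# Step 2: GPU training, runs after preprocessing completes
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train-network",
    execution_queue="gpu-queue",
)

pipe.start()
```

Each base task can carry its own container via `Task.set_base_docker(...)` (see the link above), so the preprocessing step and the training step can use different images.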

  
  
Posted 2 years ago

Yes, this is the use case. I think we can use something like Redis for this communication.
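Stripped to its core, that is a producer/consumer queue. Here is an in-process sketch using Python's stdlib `queue` in place of Redis (with Redis you would typically use `LPUSH`/`BRPOP` on a shared list key instead, so both containers can reach the queue over the network):

```python
import queue
import threading

def preprocess(raw_images, q):
    """Producer: stand-in for the CPU container's preprocessing loop."""
    for img in raw_images:
        q.put(img * 2)   # placeholder for real image preprocessing
    q.put(None)          # sentinel: signal that no more work is coming

def train(q):
    """Consumer: stand-in for the GPU container's training loop."""
    processed = []
    while True:
        item = q.get()
        if item is None:  # sentinel reached, stop consuming
            break
        processed.append(item)
    return processed

q = queue.Queue()
producer = threading.Thread(target=preprocess, args=([1, 2, 3], q))
producer.start()
result = train(q)
producer.join()
print(result)  # → [2, 4, 6]
```

The sentinel value (`None`) is the important design choice: it lets the consumer terminate cleanly without polling or timeouts, and the same idea carries over to a Redis-backed queue.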

  
  
Posted 2 years ago

I’ll let you know 😃

  
  
Posted 2 years ago