
We have a use case where an experiment consists of multiple Docker containers. For example, one container works on a CPU machine, preprocesses images, and puts them into a queue. The second (main) one resides on a GPU machine, reads tensors and targets from the queue, and updates the weights of the network. One workaround would be to create a script that creates two linked experiments for these jobs and puts them into different ClearML queues. Are there better ways to achieve the same goal?
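For reference, the workaround described above could be scripted roughly like this (a minimal sketch only: the project, task, and queue names are assumptions, and it needs the `clearml` package plus a configured server, so nothing is executed at import time):

```python
def enqueue_linked_experiments():
    # Requires the clearml package and a configured ClearML server,
    # so the import is kept local to this sketch.
    from clearml import Task

    # Clone pre-existing template tasks (project/task names are assumptions).
    preprocess = Task.clone(
        source_task=Task.get_task(
            project_name="examples", task_name="image_preprocessing"
        ),
        name="preprocess (cpu)",
    )
    train = Task.clone(
        source_task=Task.get_task(
            project_name="examples", task_name="train_model"
        ),
        name="train (gpu)",
    )

    # Link the training task to the preprocessing task.
    train.set_parent(preprocess)

    # Send each job to a different agent queue (queue names are assumptions).
    Task.enqueue(preprocess, queue_name="cpu_queue")
    Task.enqueue(train, queue_name="gpu_queue")
```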

Posted one year ago

Answers 6

yeah, we've used pipelines in other scenarios. might be a good fit here. thanks!

Posted one year ago

As long as your clearml-agents have access to the Redis instance, it should work! Cool use case though, interested to see how well it works 🙂

Posted one year ago

I’ll let you know 😃

Posted one year ago

Pipelines! 😄

ClearML allows you to create pipelines, with each step created either from code or from a pre-existing task. Each task, by the way, can have a custom Docker container assigned that it should run inside, so it should fit nicely with your workflow!


Custom docker container per task:

You can also override the Docker container a step should use with an override in the pipeline controller.
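Putting that together, a pipeline for this use case might look roughly like the following (a sketch only: the project, task, queue, and image names are all assumptions, and it needs the `clearml` package plus a configured server, so nothing is executed at import time):

```python
def build_pipeline():
    # Requires the clearml package and a configured ClearML server,
    # so the import is kept local to this sketch.
    from clearml import PipelineController

    pipe = PipelineController(
        name="preprocess-and-train",
        project="examples",
        version="1.0.0",
    )
    # CPU step: clone a pre-existing preprocessing task.
    pipe.add_step(
        name="preprocess",
        base_task_project="examples",
        base_task_name="image_preprocessing",
        execution_queue="cpu_queue",
        # Override the Docker image the agent runs this step in.
        task_overrides={"container.image": "python:3.10"},
    )
    # GPU step: runs after preprocessing finishes.
    pipe.add_step(
        name="train",
        parents=["preprocess"],
        base_task_project="examples",
        base_task_name="train_model",
        execution_queue="gpu_queue",
        task_overrides={"container.image": "nvcr.io/nvidia/pytorch:23.04-py3"},
    )
    return pipe

# To launch: build_pipeline().start(queue="services")
```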

Posted one year ago

yes, this is the use case. I think we can use something like Redis for this communication
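The Redis hand-off between the two containers could look something like this (a minimal sketch; the queue name and pickle-based payload format are assumptions, and `r` is expected to be a `redis.Redis(...)` client created by the caller):

```python
import pickle


def encode_batch(tensors, targets):
    # Serialize a (tensors, targets) pair to bytes. Pickle is an
    # assumption; any format both containers agree on works.
    return pickle.dumps((tensors, targets))


def decode_batch(payload):
    return pickle.loads(payload)


def push_batch(r, queue_name, tensors, targets):
    # Producer side (CPU container): append the batch to a Redis list.
    # r is a redis.Redis client, e.g. redis.Redis(host="redis", port=6379)
    r.rpush(queue_name, encode_batch(tensors, targets))


def pop_batch(r, queue_name, timeout=30):
    # Consumer side (GPU container): BLPOP blocks until a batch
    # arrives or the timeout (seconds) expires.
    item = r.blpop(queue_name, timeout=timeout)
    if item is None:
        return None
    _key, payload = item
    return decode_batch(payload)
```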

Posted one year ago

After re-reading your question, cross-process communication might be difficult though. If you want the preprocessing to happen at the same time as the training, with the training pulling data from the preprocessing on the fly, that could be harder to set up. Is this your use case?

Posted one year ago