Hi, I have a question about the pipeline, especially about the parallelism part. We are considering implementing a use case and are interested in knowing whether it can be efficiently managed using ClearML Pipeline. Our use case involves a dataset that


Hi Jason, yes this can be done. Your pipeline code will look like this:

1. Execute the preprocessing task.

2. In a loop, execute the 125 data-splitting/inference tasks. Each of the 125 tasks is cloned from the same base task but given a unique step name, e.g. name = "inference_task_" + str(i) for i in range(125).

3. Collect references to the step IDs:
   ids = ["${inference_task_" + str(i) + ".id}" for i in range(125)]

4. Execute the aggregation task with the IDs passed in via parameter_override, e.g. "General/inference_ids": "[" + ",".join(ids) + "]", i.e. as a single string that the task script itself can parse.
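The steps above can be sketched with ClearML's PipelineController. The project name, base task names, and the "inference_task_i" step names below are illustrative assumptions, not values from your workspace:

```python
# Sketch of the fan-out/fan-in pipeline described above.
# Assumed names: project "examples", base tasks "preprocessing task",
# "inference task", "aggregation task" -- replace with your own.

def build_inference_ids(n: int) -> str:
    """Build the parameter_override string: a bracketed list of the
    ${step_name.id} references for the n parallel inference steps.
    ClearML resolves each ${...} to the concrete task ID at run time."""
    ids = ["${inference_task_" + str(i) + ".id}" for i in range(n)]
    return "[" + ",".join(ids) + "]"

def build_pipeline(n: int = 125):
    # Import kept local so the helper above works without ClearML installed.
    from clearml import PipelineController

    pipe = PipelineController(name="inference-pipeline",
                              project="examples", version="1.0")

    pipe.add_step(
        name="preprocess",
        base_task_project="examples",
        base_task_name="preprocessing task",
    )

    for i in range(n):
        pipe.add_step(
            name="inference_task_" + str(i),   # unique step name per clone
            base_task_project="examples",
            base_task_name="inference task",   # same base task for all clones
            parents=["preprocess"],
        )

    pipe.add_step(
        name="aggregate",
        base_task_project="examples",
        base_task_name="aggregation task",
        parents=["inference_task_" + str(i) for i in range(n)],
        parameter_override={"General/inference_ids": build_inference_ids(n)},
    )
    return pipe

if __name__ == "__main__":
    build_pipeline().start()
```

Inside the aggregation task, the resolved string looks like "[id0,id1,...]" (no quotes around the IDs), so stripping the brackets and splitting on commas is enough to recover the list.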

Let me know if you have any further questions; thanks!

  
  
Posted 6 months ago