Unanswered
Hey team, I had a question about executing pipelines using the ClearML Agent setup in K8s. When we define a pipeline script to be executed remotely and submit it to the queue for execution, we see the following things happening:


From what we have seen, in order to submit the pipeline, ClearML processes the pipeline script locally and submits it to the queue. What happens locally is that it creates a controller task (the task that orchestrates the pipeline, I guess) and records, as part of this task, the arguments of the script that needs to execute, i.e. the pipeline script.

Now, once it is submitted to the queue, a new worker pod is spun up that continues this controller task (the one created in ClearML when the script was processed locally on my machine) and processes the entire script again with the same captured arguments.
Is there a particular reason for it to be processed twice, once locally on my machine and again remotely in the worker pod? When the script is processed locally, ClearML can already identify the pipeline and its steps, so why does it need to do it again remotely? Is this purely for orchestration reasons?
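To make the question concrete, here is a plain-Python sketch of the two-phase flow described above. This is NOT ClearML code and none of these function names exist in the SDK; it is only a simplified model of "run the script locally to record a controller task, then replay the same script with the captured arguments on a remote worker":

```python
# Conceptual sketch of the "process locally, then replay remotely" flow.
# All names (pipeline_script, submit_locally, execute_remotely) are
# illustrative, not part of the ClearML API.

def pipeline_script(dataset: str, epochs: int) -> list:
    """The user's pipeline script: running it defines the step graph."""
    return [f"prepare({dataset})", f"train(epochs={epochs})", "evaluate()"]

def submit_locally(dataset: str, epochs: int) -> dict:
    """Phase 1 (local machine): create a controller task that records
    the script and its arguments, then 'enqueue' it. No steps run yet."""
    return {
        "script": pipeline_script.__name__,
        "args": {"dataset": dataset, "epochs": epochs},
        "status": "queued",
    }

def execute_remotely(controller_task: dict) -> list:
    """Phase 2 (worker pod): re-run the same script with the captured
    arguments to rebuild the step graph, then orchestrate the steps."""
    steps = pipeline_script(**controller_task["args"])  # processed again
    controller_task["status"] = "completed"
    return steps

task = submit_locally("my-data", epochs=3)
steps = execute_remotely(task)
print(steps)
```

In this model the remote replay is what lets the worker rediscover the step graph from nothing but the recorded script and arguments, which may be the orchestration reason being asked about.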

  
  
Posted 4 months ago
86 Views
0 Answers