How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion


Sorry, I think something’s got lost in translation here, but thanks for the explanation.

Hopefully this is clearer:

  • Say we have a new ClearML pipeline as code on a new commit in our repo.
  • We want to build and run this new pipeline and have it available on the ClearML Server.
  • We want to run a suite of tests that validate/verify the performance of this entire ClearML Pipeline, e.g. by running it on a set of predefined inputs and checking the artifacts it creates. These checks will be made from outside the ClearML Pipeline itself, i.e. we need the Pipeline Run to have been stored on ClearML, and we need its Task ID to fetch the info.
  • If validation passes, we want to mark the ClearML Pipeline run as having passed the tests, e.g. via a tag, so that our automated systems know which ClearML Pipeline to clone for future runs.
When you mention model verification and training, you are talking about those running within the pipeline. This is not the desired behaviour: the Pipeline itself is the product, and therefore the Pipeline itself needs to be validated/verified as a whole. Rough sketches of what we are after follow below.
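
To make the first two steps concrete, here is a minimal sketch of launching a stored pipeline from outside and waiting for it to finish, assuming the pipeline controller already exists as a Task on the ClearML Server (the task ID and queue name are placeholders):

```python
from clearml import Task

# Placeholders -- substitute your own pipeline controller task ID and agent queue.
TEMPLATE_PIPELINE_TASK_ID = "replace-with-pipeline-task-id"
QUEUE_NAME = "services"

# Clone the stored pipeline controller and enqueue the clone for execution.
pipeline_run = Task.clone(source_task=TEMPLATE_PIPELINE_TASK_ID, name="validation run")
Task.enqueue(pipeline_run, queue_name=QUEUE_NAME)

# Block until the run reaches a terminal state; raises if the run fails.
pipeline_run.wait_for_status(
    status=(Task.TaskStatusEnum.completed,),
    raise_on_status=(Task.TaskStatusEnum.failed,),
    check_interval_sec=30.0,
)
print(f"Pipeline run {pipeline_run.id} finished with status {pipeline_run.get_status()}")
```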
  
  
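And a similar sketch of the validation and tagging steps: fetching artifacts from the stored run by its Task ID and tagging it if the external checks pass (the artifact name and the validation function are hypothetical, for illustration only):

```python
from clearml import Task

run = Task.get_task(task_id=pipeline_run.id)  # any stored pipeline-run Task ID works here

# Pull an artifact the pipeline produced; "predictions" is a hypothetical artifact name.
predictions_path = run.artifacts["predictions"].get_local_copy()

# run_validation_suite() stands in for whatever external checks you run.
if run_validation_suite(predictions_path):
    run.add_tags(["validation-passed"])  # automation can now find this run by tag
```

Automation could then locate the validated pipeline with something like Task.get_tasks(project_name=..., tags=["validation-passed"]) and clone it for future runs.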