
Let’s imagine I’m building a pipeline with five consecutive steps, where some of the steps are non ML/DL based. Using ClearML I run a lot of experiments to find the right pipeline configuration. After I found the right algorithms and parameters for my pipeline steps, I want to use the pipeline in production. Since I use multiple steps and have non ML/DL steps, I can’t use clearml-serve or any other model deployment. Is there a way to run all pipeline steps, not in isolation but consecutive in the same environment? For a production environment, it wouldn’t work to spin up a ClearML Agent per step and wait for pulling code and uploading artifacts. Just too slow! Basically, I’m looking for something that merges multiple Tasks together and allows local artifact passing without the ClearML Server in-between.

Posted 2 years ago

Answers 2

without the ClearML Server in-between.

You mean the upload/download is slow? What is the reasoning behind removing the ClearML server?

ClearML Agent per step

You can use the ClearML agent to build a docker image per Task, so all you need is to run the docker. Will that help?
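A minimal sketch of that approach, assuming docker mode: `clearml-agent build` bakes a Task's environment and code into a standalone docker image, which you can then run back-to-back on the same host. The task IDs, image names, and the shared volume path below are placeholders, not values from this thread.

```shell
# Bake each pipeline step (Task) into a self-contained docker image once,
# so production runs skip the per-step code pull / environment setup.
clearml-agent build --id <step1_task_id> --docker --target step1_image
clearml-agent build --id <step2_task_id> --docker --target step2_image

# At run time, execute the steps consecutively on the same machine,
# passing artifacts through a shared local volume instead of uploading
# them to the ClearML server between steps.
docker run --rm -v /data/pipeline:/data step1_image
docker run --rm -v /data/pipeline:/data step2_image
```

The steps still report status to the server, but the heavy artifact hand-off stays on the local volume.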

Posted 2 years ago

Hi ClumsyElephant70

Is there a way to run all pipeline steps, not in isolation but consecutive in the same environment?

You mean as part of a real-time inference process ?

Posted 2 years ago