Hello everyone, I would like to know how you use ClearML pipelines in your projects. What are your most elaborate pipelines? So far, I am using "only" a pipeline that looks like this:

  1. data reading and reformatting
  2. data preparation (mostly just a train-test split)
  3. ML training (using the train data; output is a model)
  4. evaluation (using the test data from step 2 and the model)

I am sure some of you have more sophisticated setups. Do tell! 🙂 (A sketch of what this could look like in code is below.)
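For reference, here is a minimal sketch of such a four-step pipeline using ClearML's PipelineDecorator. The project name, function names, and the stubbed-out data/model handling are illustrative assumptions, not the poster's actual code:

```python
# Minimal sketch of the four-step pipeline described above, using
# ClearML's PipelineDecorator. Names and stub logic are illustrative.
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["raw_data"], cache=True)
def read_and_reformat(source_path: str):
    # Imports live inside the component so each step can run standalone.
    import pandas as pd
    raw_data = pd.read_csv(source_path)  # plus any reformatting
    return raw_data


@PipelineDecorator.component(return_values=["train_data", "test_data"], cache=True)
def prepare_data(raw_data, test_size: float = 0.2):
    from sklearn.model_selection import train_test_split
    train_data, test_data = train_test_split(raw_data, test_size=test_size)
    return train_data, test_data


@PipelineDecorator.component(return_values=["model"], cache=True)
def train_model(train_data):
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier()
    model.fit(train_data.drop(columns=["label"]), train_data["label"])
    return model


@PipelineDecorator.component(return_values=["score"])
def evaluate(model, test_data):
    score = model.score(test_data.drop(columns=["label"]), test_data["label"])
    print(f"test accuracy: {score:.3f}")
    return score


@PipelineDecorator.pipeline(name="example-pipeline", project="examples", version="0.1")
def run_pipeline(source_path: str = "data.csv"):
    raw = read_and_reformat(source_path)
    train, test = prepare_data(raw)
    model = train_model(train)
    evaluate(model, test)


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # run everything in-process for debugging
    run_pipeline()
```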
  
  
Posted 11 months ago

Answers 4


"using your method you may not reach the best set of hyperparameters."

Of course you are right. It is an efficiency trade-off: speed vs. effectiveness. Whether it is worth it depends on the use case. Here it is worth it, because the model's performance is not sensitive to the parameter we search for first; being in the ballpark is enough. And for the second set of parameters we need to do a full grid search (the parameters are booleans and strings), so repeating that search for every candidate of the first parameter would drive the cost up.

cleanly split codebase into components with clear responsibilities

I agree, and it was my first instinct as well. However, I am not sure this type of separation of concerns should be done at the level of ClearML if speed is a consideration: ClearML adds quite a bit of runtime overhead per pipeline component. I have looked into Kedro for implementing separation of concerns, but I am not yet sure how to combine Kedro with ClearML, as there is no official support from either side.

What do you think?

  
  
Posted 11 months ago

Sounds interesting. But my main concern with this kind of approach: if the surface (hparam1, hparam2) → objective_fn_score is non-convex, your method may not reach the best set of hyperparameters. Maybe try smarter search algorithms, like BOHB or TPE, if you have a large search space; otherwise, you can do a few rounds of manual random search, shrinking the search space around the region of the most likely best hyperparameters after every round.
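For reference, a minimal sketch of a TPE-style search over both parameters at once, using ClearML's HyperParameterOptimizer with its Optuna backend (Optuna's default sampler is TPE). The base task ID, queue, metric names, and parameter names are placeholders:

```python
# Sketch: joint hyperparameter search with ClearML's HyperParameterOptimizer
# and the Optuna backend (requires `pip install optuna`). The task ID, queue,
# metric names, and parameter names below are placeholders.
from clearml import Task
from clearml.automation import (
    HyperParameterOptimizer, UniformParameterRange, DiscreteParameterRange,
)
from clearml.automation.optuna import OptimizerOptuna  # TPE by default

task = Task.init(project_name="examples", task_name="hpo-controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-training-task-id>",  # template experiment to clone
    hyper_parameters=[
        UniformParameterRange("General/learning_rate", 1e-4, 1e-1),
        DiscreteParameterRange("General/use_feature_x", [True, False]),
    ],
    objective_metric_title="validation",
    objective_metric_series="score",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",
    max_number_of_concurrent_tasks=4,
    total_max_jobs=50,
)
optimizer.start()
optimizer.wait()
print(optimizer.get_top_experiments(top_k=3))
optimizer.stop()
```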

As for why you would structure your code using pipelines: I come from a somewhat heavy software engineering background, so for me a codebase cleanly split into components with clear responsibilities is the best thing, and caching is just a nice addition 🙂

  
  
Posted 11 months ago

@<1537605940121964544:profile|EnthusiasticShrimp49> : The biggest advantage I see in splitting your code into pipeline components is caching. Structuring your code is another, smaller one, but I was told by the staff that this should not be one's main aim with ClearML components. What is your main takeaway for splitting your code into components?

My HPO on top of the pipeline is already working 🙂 I am currently experimenting with using the HPO in another pipeline that creates two HPO steps (from the same function!) to first optimize along one direction of the parameter space and then along the other; the reason for this is to save time, because a full search would take forever. A sketch of this two-stage setup is below.
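A minimal sketch of what such a two-stage setup might look like, reusing one pipeline component for both HPO steps. The search spaces, the objective function, and the naive grid search inside the component are all assumptions for illustration, not the poster's actual code:

```python
# Sketch: one HPO component reused twice. Stage 1 searches hparam_a with
# hparam_b fixed; stage 2 searches the boolean/string parameters while
# keeping the best hparam_a found. Objective and values are illustrative.
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["best_params"], cache=True)
def hpo_step(search_space: dict, fixed_params: dict):
    import itertools

    def objective(params):  # stand-in for a real train + evaluate call
        score = -((params["hparam_a"] - 0.3) ** 2)
        if params.get("mode") == "exact":
            score += 0.05
        return score

    # Naive grid search over the (small) search space.
    best_params, best_score = None, float("-inf")
    keys = list(search_space)
    for values in itertools.product(*(search_space[k] for k in keys)):
        params = {**fixed_params, **dict(zip(keys, values))}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params


@PipelineDecorator.pipeline(name="two-stage-hpo", project="examples", version="0.1")
def run():
    # Stage 1: coarse search along hparam_a only.
    stage1 = hpo_step({"hparam_a": [0.1, 0.2, 0.3, 0.4]}, {"hparam_b": True})
    # Stage 2: full search over the boolean/string parameters, passing the
    # stage-1 winner in as the fixed parameters.
    stage2 = hpo_step({"hparam_b": [True, False], "mode": ["fast", "exact"]},
                      stage1)
    print("best parameters:", stage2)


if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run()
```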

  
  
Posted 11 months ago

Hey @<1523704157695905792:profile|VivaciousBadger56> , I was playing around with the Pipelines a while ago, and managed to create one where I have a few steps in the beginning creating ClearML datasets like users_dataset , sessions_dataset , prefferences_dataset , then I have a step which combines all 3, then an independent data-quality step which runs in parallel with the model training. Also, if you want to have some fun, you can try to parametrize your pipelines and run HPO on an entire pipeline. Roughly, it looks like the sketch below.
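A rough sketch of that shape of pipeline using ClearML's PipelineController. Step dependencies are inferred from the "${step.output}" references, so the data-quality check and the training step both depend only on the combine step and can run concurrently when executed through agents. All function bodies and names are assumptions:

```python
# Sketch: build three datasets, merge them via parent_datasets, then run a
# data-quality check and model training as sibling steps. Function bodies
# are stubs; only the PipelineController / Dataset APIs are ClearML's.
from clearml import PipelineController


def make_dataset(name: str):
    from clearml import Dataset
    ds = Dataset.create(dataset_name=name, dataset_project="examples")
    # ds.add_files(...) would go here
    ds.upload()
    ds.finalize()
    return ds.id


def combine(users_id: str, sessions_id: str, preferences_id: str):
    from clearml import Dataset
    merged = Dataset.create(
        dataset_name="combined_dataset", dataset_project="examples",
        parent_datasets=[users_id, sessions_id, preferences_id],
    )
    merged.upload()
    merged.finalize()
    return merged.id


def quality_check(dataset_id: str):
    print(f"running data-quality checks on {dataset_id}")


def train(dataset_id: str):
    print(f"training a model on {dataset_id}")


pipe = PipelineController(name="dataset-pipeline", project="examples", version="0.1")
for ds_name in ("users_dataset", "sessions_dataset", "prefferences_dataset"):
    pipe.add_function_step(name=ds_name, function=make_dataset,
                           function_kwargs={"name": ds_name},
                           function_return=["dataset_id"],
                           cache_executed_step=True)
pipe.add_function_step(
    name="combine", function=combine,
    function_kwargs={
        "users_id": "${users_dataset.dataset_id}",
        "sessions_id": "${sessions_dataset.dataset_id}",
        "preferences_id": "${prefferences_dataset.dataset_id}",
    },
    function_return=["dataset_id"],
)
# Both of these depend only on "combine", so agents can run them in parallel.
pipe.add_function_step(name="quality_check", function=quality_check,
                       function_kwargs={"dataset_id": "${combine.dataset_id}"})
pipe.add_function_step(name="train", function=train,
                       function_kwargs={"dataset_id": "${combine.dataset_id}"})
pipe.start_locally(run_pipeline_steps_locally=True)
```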

  
  
Posted 11 months ago