I Saw Some Talk Of Clearml + Kedro On Reddit. Is That A Good Approach?


AgitatedDove14, HollowKangaroo16, have you two had any further success on the kedro/clearml front?

I have been looking into this as well. The impression I have so far is that clearml is similar to mlflow, just on steroids, since it provides additional capabilities around orchestration and experimentation.

AgitatedDove14
Kedro, in my opinion, is a really nice tool for keeping a clean code base when building complex data science projects (consisting of one or more pipelines). The UI is really secondary to the abstractions/separation of concerns it provides, which are the really powerful components in my opinion. From my point of view, kedro/clearml could be used together in several ways:
- clearml tracking of experiments run through kedro (similar to tracking with mlflow)
- clearml tracking and deployment of whole workflows designed in kedro

I think the challenge here is to pick the right mapping between abstractions. E.g. should a node in kedro (which usually is one function but can also be more involved) be equivalent to a task, or should a pipeline be a task?
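Just to make the pipeline-as-a-task option concrete, here is a rough sketch (not the actual plugin; the project name and the hook registration are placeholders) of a kedro hook that opens one clearml Task per pipeline run:

```
from clearml import Task
from kedro.framework.hooks import hook_impl


class ClearMLPipelineHook:
    """One clearml Task per kedro pipeline run (pipeline == task mapping)."""

    def __init__(self):
        self._task = None

    @hook_impl
    def before_pipeline_run(self, run_params, pipeline, catalog):
        # Open the task and record the kedro run parameters on it.
        self._task = Task.init(
            project_name="kedro-demo",  # placeholder project name
            task_name=run_params.get("pipeline_name") or "__default__",
        )
        self._task.connect(dict(run_params))

    @hook_impl
    def after_pipeline_run(self, run_params, pipeline, catalog):
        # Close the task when the kedro run finishes.
        if self._task is not None:
            self._task.close()


# Registered e.g. via HOOKS = (ClearMLPipelineHook(),) in the project's settings.py
```

The node-as-a-task mapping would instead hook into before_node_run / after_node_run, at the cost of many more (and much smaller) tasks per run.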

Kedro projects/pipelines can already be deployed to argo workflows / airflow / databricks and some other targets for execution, so adding clearml would be really interesting.

I am currently writing a small plugin that tries to link up kedro with clearml. It would be interesting to share experiences and get input from the clearml people at some point.

The really interesting things arise when you run parts of the pipelines in kedro on a local machine or within a clearml agent and keep a good record of those runs.
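For reference, clearml already supports handing a locally started run over to an agent, which could be one building block for mixed local/remote kedro runs. A minimal hedged sketch (queue and project names are placeholders):

```
from clearml import Task

# Rough sketch: start locally, then hand the run over to a clearml agent.
# "kedro-demo" and "default" are placeholder project/queue names.
task = Task.init(project_name="kedro-demo", task_name="run-on-agent")

# Stops the local process and enqueues this task for a clearml-agent to execute;
# when the script later runs under the agent, this call simply continues.
task.execute_remotely(queue_name="default", exit_process=True)
```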

Also, is it good practice to reuse task_ids when running the same job twice during debugging, or should one always create a new one? A lot of questions 😉
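On the task_id question, clearml itself has a knob for this on Task.init (the exact reuse semantics are worth double-checking in the docs). A tiny sketch with placeholder names:

```
from clearml import Task

# Setting reuse_last_task_id=False forces a fresh task on every debug run,
# instead of letting clearml reuse the previous task id when it considers
# that safe to do so. Project/task names are placeholders.
task = Task.init(
    project_name="kedro-demo",
    task_name="debug-node",
    reuse_last_task_id=False,
)
```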

If anyone is interested in exploring this more let me know!

  
  