Disclaimer, not exactly a ClearML question. Just wasn't getting a response when asked in the other channel. Anyway here it goes. This is not tool specific. More of a general MLOps question. Given you have a classification model that you plan to do CT on

Disclaimer, not exactly a ClearML question. Just wasn't getting a response when I asked in the other channel. Anyway, here it goes.

This is not tool specific. More of a general MLOps question.

Given you have a classification model that you plan to do CT (continuous training) on, and data coming in as a stream: maybe you pull data on a time basis, or you pull data for training once you have n samples in a batch. How would you evaluate the model, say on accuracy for simplicity? Do you split each batch into train and test? That would mean you're not using all the available data you get to train the model.
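
Just to make that first option concrete, here's a rough sketch of splitting each incoming batch (assuming scikit-learn and a model that supports `partial_fit`; the function and variable names are only illustrative):

```python
# Minimal sketch of the per-batch holdout option, assuming scikit-learn and a
# model that supports incremental training via partial_fit. Names are illustrative.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

model = SGDClassifier()

def update_on_batch(X_batch, y_batch, classes, test_size=0.2):
    # Hold out part of the batch for evaluation; the held-out samples are
    # never trained on, which is exactly the trade-off mentioned above.
    X_train, X_test, y_train, y_test = train_test_split(
        X_batch, y_batch, test_size=test_size, stratify=y_batch
    )
    model.partial_fit(X_train, y_train, classes=classes)
    return accuracy_score(y_test, model.predict(X_test))
```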

What I've just thought of as a demo is: you train the model and then deploy it, to staging maybe. Then, when a new batch of data comes in, you first evaluate the models already deployed to staging and production on that batch. Only then do you train the staging model on that batch of data.
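
As a rough sketch of that evaluate-then-train idea (again assuming scikit-learn; `staging_model`, `production_model` and `on_new_batch` are made-up names for illustration, not any particular tool's API):

```python
# Rough sketch of "evaluate first, then train" (prequential / test-then-train
# evaluation). The model and function names are assumptions for illustration.
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

staging_model = SGDClassifier()
production_model = SGDClassifier()

def on_new_batch(X_batch, y_batch, classes):
    metrics = {}
    # 1. Score both deployed models on the new batch *before* training,
    #    so every sample is used as unseen test data exactly once.
    for name, m in (("staging", staging_model), ("production", production_model)):
        try:
            metrics[name] = accuracy_score(y_batch, m.predict(X_batch))
        except NotFittedError:
            metrics[name] = None  # nothing deployed yet on the very first batch
    # 2. Only then train the staging model on the full batch, so no data is wasted.
    staging_model.partial_fit(X_batch, y_batch, classes=classes)
    return metrics
```

The appeal of this setup is that every batch is scored as unseen data before it is ever used for training, so nothing has to be held back from the training set.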

How would you set up an evaluation mechanism in your MLOps pipeline? I'm curious.

  
  
Posted 3 years ago

Answers