Hi, we have been using ClearML in our development environment to train and benchmark our models. I was wondering what ClearML's role is in the transition to production. Two specific points: deployment, and an automated retraining pipeline.


Hi SubstantialElk6

Generically, we would 'export' the preprocessing steps, set up an inference server, and then pipe data through the above to get results. How should we achieve this with ClearML?

We are working on integrating the OpenVINO and NVIDIA Triton serving engines into ClearML (both will be available soon).
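
In the meantime, one common pattern is to pull the registered output model from the training Task and load it inside your own inference server. Below is a minimal sketch of that step; the project name "examples" and task name "train model" are placeholders, and the server itself is not shown.

```python
# Sketch: fetch the trained weights registered by a ClearML training task,
# so they can be loaded into a hand-rolled inference server.
from clearml import Task

# Locate the training task whose model we want to serve
# ("examples" / "train model" are hypothetical names - adjust to your setup)
train_task = Task.get_task(project_name="examples", task_name="train model")

# Take the last registered output model and download a local copy of the weights
output_model = train_task.models["output"][-1]
weights_path = output_model.get_local_copy()

print("Serving weights from:", weights_path)
# ... load `weights_path` with your framework of choice and wire it into the server
```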

Automated retraining

In cases of data drift, retraining of models would be necessary. Generically, we pass newly labelled data to fine-tune the weights of the deployed model and then redeploy without user intervention. How should we achieve this with ClearML?

So basically you write a service Task (which can be deployed on the services queue, or packaged as a standalone container) that polls the state of the clearml-server, i.e. checks whether a new Dataset Task was created. Once it detects one, it clones the pipeline Task and puts it into execution on the services queue.
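
A minimal sketch of such a polling service is below. The project names ("my_datasets", "my_pipelines", "DevOps"), the pipeline task name, and the polling interval are all placeholders; adjust them to your own project layout.

```python
# Sketch: a watchdog service Task that polls for newly created datasets
# and, on each new one, clones a pipeline template and enqueues it.
import time

from clearml import Task, Dataset

POLL_INTERVAL_SEC = 300
DATASET_PROJECT = "my_datasets"        # hypothetical project holding the datasets
PIPELINE_PROJECT = "my_pipelines"      # hypothetical project holding the pipeline task
PIPELINE_TASK_NAME = "retraining pipeline"
EXECUTION_QUEUE = "services"

# Register this script itself as a Task, so it can run on the services queue
task = Task.init(project_name="DevOps", task_name="dataset watchdog")

seen_dataset_ids = set()

while True:
    # List dataset versions in the project; each entry is a dict with an 'id' and 'name'
    datasets = Dataset.list_datasets(dataset_project=DATASET_PROJECT)
    new_datasets = [d for d in datasets if d["id"] not in seen_dataset_ids]

    for d in new_datasets:
        seen_dataset_ids.add(d["id"])
        # Clone the pipeline template and enqueue the copy for execution
        template = Task.get_task(project_name=PIPELINE_PROJECT, task_name=PIPELINE_TASK_NAME)
        cloned = Task.clone(source_task=template, name="retrain on {}".format(d["name"]))
        Task.enqueue(cloned, queue_name=EXECUTION_QUEUE)

    time.sleep(POLL_INTERVAL_SEC)
```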
