Answered

Hi, we have been using ClearML in our development environment to train our models and benchmark them. I was wondering what ClearML's role is in the transition to production. Two specific points: deployment, and an automated retraining pipeline.

Deployment
Generically, we would 'export' the preprocessing steps, set up an inference server, and then pipe data through the above to get results. How should we achieve this with ClearML?

Automated retraining
In cases of data drift, retraining of models would be necessary. Generically, we pass newly labelled data to fine-tune the weights of the deployed model and then redeploy without user intervention. How should we achieve this with ClearML?

  
  
Posted 3 years ago

Answers 2


These are excellent questions. While we are working towards including more of our users' stack within the ClearML solution, there is still some time until we unveil "the ClearML approach" to these. From what I've seen within our community, deployment can be anything from a simple launch of a Docker image built with 'clearml-agent build' to automated training pipelines.

Re: triggering - this is why we have clearml-task 😉
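As a rough illustration of triggering a retraining run with clearml-task, an invocation could look like the sketch below. The project name, repo URL, script path, and queue are placeholders, not values from this thread; check `clearml-task --help` on your installed version for the exact flags.

```shell
# Hypothetical example: launch a training script as a remotely executed Task.
# All names below are placeholders.
clearml-task \
  --project Retraining \
  --name "retrain-on-new-data" \
  --repo https://github.com/your-org/train-repo.git \
  --script train.py \
  --queue services
```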

  
  
Posted 3 years ago

Hi SubstantialElk6

Generically, we would 'export' the preprocessing steps, setup an inference server, and then pipe data through the above to get results. How should we achieve this with ClearML?

We are working on integrating the OpenVINO serving and NVIDIA Triton serving engines into ClearML (they will both be available soon).

Automated retraining

In cases of data drift, retraining of models would be necessary. Generically, we pass newly labelled data to fine-tune the weights of the deployed model and then redeploy without user intervention. How should we achieve this with ClearML?

So basically you write a service Task (which can be deployed on the services queue, or packaged as a standalone container) that polls the state of the clearml-server (i.e. checks whether a new Dataset Task was created); once it detects one, it clones the pipeline Task and puts it into execution on the services queue.
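The polling service described above can be sketched as follows. To keep the control flow testable offline, the ClearML-specific steps (listing datasets, cloning and enqueuing the pipeline Task) are passed in as callables; in a real service you would wire in the ClearML SDK calls, whose exact names and signatures should be verified against the clearml version you run.

```python
import time


def find_new_dataset(list_datasets, seen_ids):
    """Return the id of the first dataset not yet processed, or None.

    `list_datasets` is a callable returning dicts with an 'id' key;
    in a real service it would wrap ClearML's dataset listing.
    """
    for ds in list_datasets():
        if ds["id"] not in seen_ids:
            return ds["id"]
    return None


def run_once(list_datasets, clone_and_enqueue, seen_ids):
    """One polling iteration: trigger a retrain for any unseen dataset."""
    new_id = find_new_dataset(list_datasets, seen_ids)
    if new_id is not None:
        # In a real service: clone the pipeline Task and enqueue it
        # on the 'services' queue, parameterized with the new dataset.
        clone_and_enqueue(new_id)
        seen_ids.add(new_id)
        return True
    return False


def poll_forever(list_datasets, clone_and_enqueue, interval_sec=300):
    """Main loop of the service Task."""
    seen = set()
    while True:
        run_once(list_datasets, clone_and_enqueue, seen)
        time.sleep(interval_sec)
```

The `poll_forever` loop is what you would run inside the service Task itself; each detected dataset triggers exactly one retraining pipeline.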

  
  
Posted 3 years ago