
So, I have just started using ClearML for local data and experiment tracking and it's been super helpful. Now I am moving towards deploying and serving the models using clearml-serving and Triton. I have done some basic experimenting with the provided keras_mnist example and I understand the basic process flow. However, I would like to ask about best practices for the following scenarios:

1. Let's assume that I have a model trained from some task in project B being served using serving project A. What would be the best way to add another model from another project, say C, to the same Triton server serving the previous model?
2. Suppose that serving project A is serving some model version 1, a new model is trained, and it starts serving as model version 2, but at runtime, for some reason, we need to revert to model version 1. What would be the best way to achieve this?
3. What would be the best way to get all the models trained using a certain Task? I know we can use query_models to filter models based on project and task, but is it the best way?
4. Suppose that a new model version 2 is trained, but it does not fulfill our target metrics. Is it possible to just save the model to the model repository and not serve it, if model version 1 is already being served?

  
  
Posted 2 years ago

Answers 5


Suppose that a new model version 2 is trained, but it does not fulfill our target metrics. Is it possible to just save the model to the model repo and not serve it, if model version 1 is already being served?

Sure, just do not "publish" the model; it will be stored in the model repository, fully accessible, but clearml-serving will not serve it 🙂
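
For illustration, a minimal sketch of that flow (the task ID and the metric check below are placeholders, not something from this thread):

from clearml import Task

# Placeholder task ID for the training task that produced "model version 2"
task = Task.get_task(task_id="aabb")

# The trained weights are already stored in the model repository as an
# output model; publishing is a separate, explicit step.
model = task.models["output"][-1]

meets_target_metrics = False  # placeholder: replace with your own metric check
if meets_target_metrics:
    model.publish()  # only published models are picked up for serving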

  
  
Posted 2 years ago

  1. Suppose that serving project A is serving some model version 1, a new model is trained, and it starts serving as model version 2, but at runtime, for some reason, we need to revert to model version 1. What would be the best way to achieve this?

If you archive the model, then clearml-serving will pick the "latest" non-archived model, essentially reverting to the previous version. Also notice that it supports multiple versions on a single endpoint (this is again a feature of Triton itself, which clearml-serving exposes and manages).
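
As a side note, archiving can be done from the Models table in the web UI. If you want to double-check programmatically which candidates remain after archiving the bad version, something along these lines should work (project and model names are placeholders):

from clearml import Model

# Archived models are excluded by default, so everything listed here is
# still a candidate for clearml-serving to pick up.
for m in Model.query_models(
    project_name="Project B",    # placeholder: the training project
    model_name="serving model",  # placeholder: the model name
):
    print(m.id, m.name)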

  
  
Posted 2 years ago

Hi RipeAnt6

What would be the best way to add another model from another project, say C, to the same Triton server serving the previous model?

You can add multiple calls to clearml-serving, each one with a new endpoint and a new project/model to watch; when you launch it, it will set up all endpoints on a single Triton server (the model optimization and loading are taken care of by Triton anyhow).
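
For reference, with the current clearml-serving CLI the same idea looks roughly like this; the service ID, endpoint names and project/model names below are placeholders, and Triton endpoints also need the usual --preprocess and --input-*/--output-* arguments shown in the clearml-serving examples (see clearml-serving model add --help for your installed version):

# Register models from two different projects on the same serving service;
# both end up on the single Triton instance behind it.
clearml-serving --id <service_id> model add --engine triton \
    --endpoint "model_b" --project "Project B" --name "trained model B"

clearml-serving --id <service_id> model add --engine triton \
    --endpoint "model_c" --project "Project C" --name "trained model C"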

  
  
Posted 2 years ago

What would be the best way to get all the models trained using a certain Task? I know we can use query_models to filter models based on project and task, but is it the best way?

On the Task object itself you have all the models.
Task.get_task(task_id='aabb').models['output']
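
Expanding that one-liner into a small runnable sketch (same placeholder task ID):

from clearml import Task

task = Task.get_task(task_id="aabb")  # placeholder task ID

# 'output' holds the models created by the task, 'input' the ones it consumed
for model in task.models["output"]:
    print(model.id, model.name, model.url)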

  
  
Posted 2 years ago

AgitatedDove14 Thank you for your answers.

  
  
Posted 2 years ago