Answered
We're Working On ClearML Serving Right Now And Are Very Interested In What You All Are Searching For In A Serving Engine, So We Can Make The Best Serving Engine We Can

We're working on ClearML Serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving engine we can 💪 Let us know, or drop into this Reddit discussion to learn from others and add your own thoughts on the subject: https://www.reddit.com/r/mlops/comments/urp815/what_are_you_missing_in_current_model_serving/

Maybe we missed something really obvious that should be in there.

  
  
Posted 2 years ago

Answers 5


clearml-serving does not support spaCy models out of the box, among many others; ClearML-Serving currently only supports the following:
Machine learning models (Scikit-Learn, XGBoost, LightGBM)
Deep learning models (TensorFlow, PyTorch, ONNX)
An easy way to extend support to different models would be a boon.
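
To make the contrast concrete: for the supported frameworks, the user-facing work is mostly a small preprocess.py that maps the request body to the feature layout the model expects and maps the prediction back to JSON. Below is a minimal sketch along the lines of the clearml-serving scikit-learn example; the Preprocess class and method names come from the project's preprocess template, the extra state/statistics arguments are elided, and the x0/x1 field names are placeholders:

    # preprocess.py registered together with a scikit-learn model endpoint
    from typing import Any


    class Preprocess(object):
        def preprocess(self, body: dict, *args, **kwargs) -> Any:
            # turn the JSON request body into the 2D array the model expects
            return [[body.get("x0", 0.0), body.get("x1", 0.0)]]

        def postprocess(self, data: Any, *args, **kwargs) -> dict:
            # wrap the raw prediction in a JSON-serializable response
            return {"y": data.tolist() if hasattr(data, "tolist") else data}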

I believe in such scenarios a custom engine would be required. I would like to know how difficult it is to create a custom engine with clearml-serving, for example for spaCy? Another point to note is that MLflow is able to support a multitude of models from dev to deployment. Are ClearML and ClearML-Serving going to support as much as well?
This discussion can also touch on how ClearML-Serving will evolve from this month's release. For reference, the model flavors MLflow supports out of the box include:
Gluon
H2O
Keras
Prophet
PyTorch
XGBoost
LightGBM
Statsmodels
Glmnet (R)
spaCy
Fastai
SHAP
Pmdarima
Diviner
scikit-learn (with Diabetes, Elastic Net and Logistic Regression examples)
TensorFlow (1.X and 2.X)
RAPIDS (Random Forest Classifier example)
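
On the custom-engine question above: with clearml-serving's custom engine, the same Preprocess class also has to load the model and run inference itself, instead of handing the model to Triton. A rough sketch of what a spaCy endpoint's preprocess.py might look like; the load/preprocess/process/postprocess hooks follow the clearml-serving custom example (extra arguments elided), and the entity-extraction input/output and the "text" request field are illustrative assumptions:

    # preprocess.py for a hypothetical spaCy endpoint served with the "custom" engine
    from typing import Any

    import spacy


    class Preprocess(object):
        def __init__(self):
            self._nlp = None

        def load(self, local_file_name: str) -> Any:
            # clearml-serving hands over the locally downloaded model artifact;
            # assume it is a pipeline directory produced by nlp.to_disk()
            self._nlp = spacy.load(local_file_name)
            return self._nlp

        def preprocess(self, body: dict, *args, **kwargs) -> Any:
            # pull the raw text out of the request body
            return body.get("text", "")

        def process(self, data: Any, *args, **kwargs) -> Any:
            # run the spaCy pipeline directly -- no Triton involved
            doc = self._nlp(data)
            return {"entities": [(ent.text, ent.label_) for ent in doc.ents]}

        def postprocess(self, data: Any, *args, **kwargs) -> dict:
            # the process() output is already JSON-serializable
            return data

The point of the sketch is that, once this hook interface is well documented, supporting an "obscure" framework is mostly a one-file exercise.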

  
  
Posted 2 years ago

Hi Jax! Thanks for the feedback, we really appreciate it 😄

MLflow is able to support a multitude of models from dev to deployment. Are ClearML and ClearML-Serving going to support as much as well?

Do you mean by this that you want to be able to seamlessly deploy models that were tracked using the ClearML experiment manager with ClearML Serving?

I believe in such scenarios a custom engine would be required. I would like to know how difficult it is to create a custom engine with clearml-serving?

Do you want clearml-serving to accept a "custom engine" argument that uses code you tracked with the experiment manager to serve the model, or do you think it's better to have good documentation on how to write a custom/spaCy/SHAP (whatever you need) extension for clearml-serving itself, and then just deploy the spaCy model, for example, using your self-built spaCy engine?

  
  
Posted 2 years ago

Do you mean by this that you want to be able to seamlessly deploy models that were tracked using the ClearML experiment manager with ClearML Serving?

Ideally that's best. Imagine that I used spaCy (among other frameworks) and I just need to add the one or two lines of ClearML code to my Python scripts to track the experiments. Then, when it comes to deployment, I don't have to worry about spaCy having a model format that Triton doesn't recognise.
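
The tracking half of that already looks roughly like the sketch below; the open question in the thread is the second half, i.e. getting the stored spaCy pipeline served without converting it into a format Triton recognises. Task.init and OutputModel are standard ClearML SDK calls, while the project/task names, the en_core_web_sm pipeline and the output path are placeholders, and update_weights_package is used on the assumption that the pipeline is saved as a directory:

    import spacy
    from clearml import OutputModel, Task

    # the "one or two lines": start tracking this run in the experiment manager
    task = Task.init(project_name="nlp-experiments", task_name="train spacy pipeline")

    nlp = spacy.load("en_core_web_sm")   # stand-in for an actual training run
    nlp.to_disk("my_spacy_pipeline")

    # register the trained pipeline as an output model so serving could later pick it up
    output_model = OutputModel(task=task, framework="spaCy")
    output_model.update_weights_package(weights_path="my_spacy_pipeline")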

Do you want clearml-serving to accept a "custom engine" argument that uses code you tracked with the experiment manager to serve the model, or do you think it's better to have good documentation on how to write a custom/spaCy/SHAP (whatever you need) extension for clearml-serving itself, and then just deploy the spaCy model, for example, using your self-built spaCy engine?

I don't quite understand the former. For the latter, I think it's always good to be able to quickly create an inference engine for those obscure ML frameworks. This is important because such an engine can be easily reused, and we don't hit overheads trying hard to make these frameworks work with clearml-serving.

  
  
Posted 2 years ago

I think a related question is: ClearML relies heavily on Triton (a good thing), but Triton only supports a few frameworks out of the box. So this 'engine' needs to make sure it can work with Triton and use all its wonderful features, such as request batching, GPU reuse, etc.
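
One way such an engine could stay inside Triton's feature set (dynamic batching, GPU sharing) is to export the tensor-based part of the model to ONNX with a dynamic batch dimension, since ONNX is one of the formats Triton loads natively. A sketch with a plain PyTorch module standing in for the real model; the module, shapes and file name are made up:

    import torch

    # stand-in for the tensor-based component of a model from an unsupported framework
    model = torch.nn.Sequential(torch.nn.Linear(16, 4), torch.nn.Softmax(dim=-1)).eval()
    example_input = torch.randn(1, 16)

    # export to ONNX with a dynamic batch axis so Triton can apply request batching
    torch.onnx.export(
        model,
        example_input,
        "model.onnx",
        input_names=["input"],
        output_names=["scores"],
        dynamic_axes={"input": {0: "batch"}, "scores": {0: "batch"}},
    )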

  
  
Posted 2 years ago

Thanks again for the extra info Jax, we'll take it back to our side and see what we can do 🙂

  
  
Posted 2 years ago