clearml-serving does not support spaCy models out of the box, among many others; currently it only supports the following:
- Machine learning models (Scikit-learn, XGBoost, LightGBM)
- Deep learning models (TensorFlow, PyTorch, ONNX)
An easy way to extend support to different models would be a boon.
I believe in such scenarios a custom engine would be required. I would like to know: how difficult is it to create a custom engine for clearml-serving, for example for spaCy? Another point to note is that MLflow is able to support a multitude of models from development to deployment. Are ClearML and ClearML-Serving going to support as much as well?
This discussion could also touch on how ClearML-Serving will evolve from this month's release.
For reference, the frameworks MLflow supports include:
- Gluon
- H2O
- Keras
- Prophet
- PyTorch
- XGBoost
- LightGBM
- Statsmodels
- Glmnet (R)
- spaCy
- Fastai
- SHAP
- Pmdarima
- Diviner
- scikit-learn (e.g. Diabetes, Elastic Net, and Logistic Regression examples)
- TensorFlow (1.x and 2.x)
- RAPIDS (e.g. Random Forest Classifier)
Hi Jax! Thanks for the feedback, we really appreciate it 😄
MLflow is able to support a multitude of models from dev to deployment. Are ClearML and ClearML-Serving going to support as much as well?
Do you mean by this that you want to be able to seamlessly deploy models that were tracked using ClearML experiment manager with ClearML serving?
I believe in such scenarios a custom engine would be required. I would like to know: how difficult is it to create a custom engine for clearml-serving?
Do you want clearml-serving to accept a "custom engine" argument that uses code you tracked with the experiment manager to serve the model? Or do you think it's better to have good documentation on how to write a custom extension (spaCy, SHAP, whatever you need) for clearml-serving itself, and then deploy the spaCy model, for example, using your self-built spaCy engine?
Do you mean by this that you want to be able to seamlessly deploy models that were tracked using ClearML experiment manager with ClearML serving?
Ideally that's best. Imagine that I used spaCy (among other frameworks) and I just need to add one or two lines of ClearML code to my Python scripts to track the experiments. Then, when it comes to deployment, I don't have to worry about spaCy having a model format that Triton doesn't recognise.
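The "one or two lines" pattern is what the ClearML experiment manager already offers. A minimal sketch of what that could look like for a spaCy training script (the `Task.init` and `upload_artifact` calls are the real ClearML API; the project/task names are made up, and the spaCy training itself is only indicated in comments so the sketch stays dependency-free):

```python
def train_with_tracking():
    """Sketch: instrumenting a (hypothetical) spaCy training script with ClearML."""
    from clearml import Task  # imported lazily; requires a ClearML server to run

    # The "one or two lines" of ClearML instrumentation:
    task = Task.init(project_name="nlp-experiments", task_name="spacy-ner-train")

    # ... ordinary spaCy training would go here, e.g.:
    # nlp = spacy.blank("en"); ...; nlp.to_disk("model_out")

    # Artifacts logged through the task show up in the ClearML UI and,
    # ideally, would be deployable from there without format worries:
    task.upload_artifact(name="model_dir", artifact_object="model_out")
```

The deployment question in this thread is exactly what happens after that last step: spaCy's on-disk format is not one Triton loads natively.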
Do you want clearml-serving to accept a "custom engine" argument that uses code you tracked with the experiment manager to serve the model? Or do you think it's better to have good documentation on how to write a custom extension (spaCy, SHAP, whatever you need) for clearml-serving itself, and then deploy the spaCy model, for example, using your self-built spaCy engine?
I don't quite understand the former. For the latter, I think it's always good to be able to quickly create an inference engine for those obscure ML frameworks. This is important because such an engine can be easily reused, and we avoid the overhead of trying hard to make each framework work with clearml-serving.
I think a related question is: ClearML relies heavily on Triton (a good thing), but Triton only supports a few frameworks out of the box. So this 'engine' needs to make sure it can work with Triton and use all its wonderful features such as request batching, GPU reuse, etc.
Thanks again for the extra info Jax, we'll take it back to our side and see what we can do 🙂