Do you mean by this that you want to be able to seamlessly deploy models that were tracked using ClearML experiment manager with ClearML serving?
Ideally, yes. Imagine that I used spaCy (among other frameworks) and I just need to add the one or two lines of ClearML code to my Python scripts to get experiment tracking. Then, when it comes to deployment, I don't have to worry about spaCy having a model format that Triton doesn't recognise.
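Those "one or two lines" could look like the sketch below. The `Task.init` call and `report_scalar` are the real ClearML SDK API; the training loop is a stand-in for an actual spaCy training script, and the project/task names are made up for illustration. It assumes `clearml` is installed and a ClearML server is configured; here the import is guarded so the script still runs without it.

```python
try:
    # The only ClearML-specific additions to the script:
    from clearml import Task                                      # line 1
    task = Task.init(project_name="NLP", task_name="spacy-ner")   # line 2
except ImportError:
    task = None  # tracking silently disabled when clearml is absent


def train(epochs: int = 3) -> list:
    """Stand-in for a spaCy training loop; loss values are synthetic."""
    losses = []
    for epoch in range(epochs):
        loss = 1.0 / (epoch + 1)  # placeholder for a real training step
        losses.append(loss)
        if task is not None:
            # Scalars reported here show up in the ClearML web UI
            task.get_logger().report_scalar("train", "loss", loss, epoch)
    return losses
```

With the two marked lines in place, ClearML auto-logs the script's arguments, console output, and installed packages alongside any scalars you report.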
Do you want clearml-serving to accept a "custom engine" argument that uses code you tracked with the experiment manager to serve the model, or do you think it's better to have good documentation on how to write a custom/spaCy/SHAP/whatever-you-need extension for clearml-serving itself, and then deploy the spaCy model, for example, using your self-built spaCy engine?
I don't quite understand the former. As for the latter, I think it's always good to be able to quickly create an inference engine for the more obscure ML frameworks. This is important because such an engine can be easily reused, and we avoid the overhead of trying hard to make each framework fit clearml-serving.
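A reusable engine along these lines could follow the clearml-serving "custom" engine pattern: a plain `Preprocess` class in a `preprocess.py` that the serving container loads. The method names below mirror that pattern, but treat this as a sketch, not the definitive interface — the spaCy call is only indicated in a comment, and a stub model is used so the example is self-contained.

```python
class Preprocess:
    """Sketch of a custom clearml-serving engine wrapping a spaCy model.

    In a real extension, `load` would receive the model artifact that
    ClearML downloaded for this endpoint, and `process` would run spaCy.
    """

    def __init__(self):
        self.model = None

    def load(self, local_file_name: str):
        # Real engine (assumed usage): self.model = spacy.load(local_file_name)
        # Stub "NER" model so the sketch runs anywhere: returns title-cased tokens.
        self.model = lambda text: [w for w in text.split() if w.istitle()]

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None):
        # Pull the raw text out of the request body
        return body.get("text", "")

    def process(self, data, state: dict, collect_custom_statistics_fn=None):
        # Run the (stub) model on the preprocessed input
        return self.model(data)

    def postprocess(self, data, state: dict, collect_custom_statistics_fn=None):
        # Shape the model output into the response payload
        return {"entities": data}
```

Because the class owns loading, pre- and post-processing, the same file can be reused for any framework Triton doesn't natively understand — only the `load` and `process` bodies change per framework.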