Hi @<1523704207914307584:profile|ObedientToad56>, I would assume that will require integrating the engine into the clearml-serving code (and a PR 🙂)
Hi @<1523704207914307584:profile|ObedientToad56>
> What would be the right way to extend this with, let's say, a custom engine that is currently not supported?
As you said, `custom` 🙂
None
This is actually a `custom` engine (see (3) in the readme, and the `preprocessing.py` implementing it). I think we should actually add a specific example to `custom` so this is more visible. Any thoughts on what would be an easy one?
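Roughly, a custom-engine `preprocessing.py` follows this skeleton (a minimal sketch based on the custom example in the repo; the joblib load and the `x`/`y` fields are just illustrations):

```python
from typing import Any


# Note: the class must be named "Preprocess"
class Preprocess(object):
    def __init__(self):
        # called once when the endpoint spins up, not per request
        self._model = None

    def load(self, local_file_name: str) -> None:
        # clearml-serving downloads the registered model and passes its local path;
        # joblib is only an illustration, load whatever your framework needs
        import joblib
        self._model = joblib.load(local_file_name)

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the raw request body into model input ("x" is a hypothetical field)
        return [body["x"]]

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # with the custom engine, the inference itself happens here
        return self._model.predict(data)

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # shape the result into the JSON returned to the caller
        return {"y": data.tolist() if hasattr(data, "tolist") else data}
```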
Hmm, thanks @<1523701087100473344:profile|SuccessfulKoala55>, what would be the right way that you would recommend for adding support for other models/frameworks like `spacy`?
Would you recommend adding other models by sending a PR in line with the lightgbm example here
None
or using the `custom` option and moving the logic for loading the model to `preprocess` or `process`?
Thanks @<1523701205467926528:profile|AgitatedDove14>. For now I have forked `clearml-serving` locally and added an engine for `spacy`. It is working fine. Yeah, I think some documentation and a good example would make it more visible. An example for something like spacy would be useful for the community.
So from what I see, the custom engine will basically call the `preprocess()` method defined in the `Preprocess` class you define.
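For spacy that would look roughly like this (a sketch only; the request schema and the entity output are illustrative assumptions):

```python
import spacy


class Preprocess(object):
    def __init__(self):
        self._nlp = None

    def load(self, local_file_name: str) -> None:
        # spacy.load() accepts a packaged pipeline directory as well as a model name
        self._nlp = spacy.load(local_file_name)

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None):
        # hypothetical request schema: {"text": "..."}
        return body["text"]

    def process(self, data, state: dict, collect_custom_statistics_fn=None):
        # run the spacy pipeline and return, e.g., the named entities
        doc = self._nlp(data)
        return {"entities": [(ent.text, ent.label_) for ent in doc.ents]}
```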
> An example for something like spacy would be useful for the community.
That's awesome! Any chance you can PR something? (no need for it to be perfect, we can take it from there)
@<1523701087100473344:profile|SuccessfulKoala55> I saw in the examples one case of the engine being passed as `custom`. My requirement is to support other frameworks like `spacy`. So I was thinking maybe I could create a pipeline that does the model load and inference and pass that pipeline. I am still figuring out the ecosystem; would something like that make sense?
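(What I mean is the endpoint would still be called like any other clearml-serving endpoint; the endpoint name and port below are assumptions based on the readme examples:)

```python
import requests

# hypothetical endpoint name; 8080 is the default port in the docker-compose setup
response = requests.post(
    "http://127.0.0.1:8080/serve/test_spacy_ner",
    json={"text": "ClearML was founded in Tel Aviv."},
)
print(response.json())  # e.g. {"entities": [["Tel Aviv", "GPE"]]}
```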