Hi Oriel!
If you only want to serve an if-else model, why do you want to use clearml-serving for that? And what do you mean by "online featurer"?
Hi DeterminedCrocodile36 ,
To use a custom engine, you need to change the preprocessing code.
https://github.com/allegroai/clearml-serving/tree/main/examples/pipeline
Section 3 is what you're interested in
And here is an example of the code you need to change. I think it's fairly straightforward.
https://github.com/allegroai/clearml-serving/blob/main/examples/pipeline/preprocess.py
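Roughly, the idea is that the same Preprocess class the serving instance loads can also do the actual "inference" when the endpoint uses a custom engine. A minimal sketch, assuming the class/method layout from the linked examples (the rule-based logic inside is just a made-up placeholder, not ClearML code):

```python
from typing import Any


# preprocess.py loaded by the clearml-serving inference instance.
# The class name Preprocess matches the linked examples.
class Preprocess(object):
    def __init__(self):
        # called once when the endpoint is loaded
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the raw request body into whatever process() expects
        return body

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # with a custom engine this is where the "model" runs --
        # here a hypothetical if/else rule instead of a framework call
        return {"approved": data.get("score", 0) > 0.5}

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # shape the final JSON response
        return data
```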
Adding a custom engine example is on the to-do list, but if you manage to add a PR with an example, that would be great 🙂
Because I want different model versions behind the same API. Some of them are rule-based, some of them are ML-based.
- At inference time I need to fetch some other features from distinct sources, e.g. some third-party API request.
Like Nathan said, custom engines are a TODO, but for your second question, you can add that API request in the model preprocessing, which is a function you can define yourself! It will be run every time a request comes in, and you can do whatever you want in it and change the incoming data however you wish 🙂
example: https://github.com/allegroai/clearml-serving/blob/main/examples/keras/preprocess.py
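For instance, here is a sketch of a preprocess.py that pulls extra features from an external service before handing the data to the model (the URL, field names and feature vector below are all hypothetical, only the Preprocess class layout follows the linked example):

```python
from typing import Any

import requests  # assuming an HTTP feature service; any client would do


class Preprocess(object):
    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # enrich the incoming request with features fetched at inference time
        extra = requests.get(
            "https://features.example.com/lookup",   # hypothetical feature store
            params={"user_id": body["user_id"]},
            timeout=2,
        ).json()
        # build the feature vector the model expects
        return [body["amount"], extra["age"], extra["segment"]]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        return {"prediction": data}
```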
But if I'm going to use the preprocessing layer for that, then I won't be able to see at the ClearML Models level which method is used for the rule-based vs the ML-based models.
What do you mean exactly? Is it that you want more visibility into what kind of preprocessing code is running for each endpoint?