Hi there!
I have a question regarding batch inference with ClearML.
I would like to serve a model using an inference task (containing the model and the code to perform the inference) as a base that gets cloned, edited (to change the input arguments), and enqueued for processing by a remote worker.
Is this a correct way to do batch inference? What is the best practice for achieving this with Docker?
Thanks in advance for your answer!
Best regards
Not a ClearML employee (just a recent user), but maybe this will help?
Hi Damjan, thank you for your message.
If I understand correctly, though, that doc covers online serving. I am looking for a solution for batch inference instead.
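For reference, the clone-and-enqueue pattern described in the question roughly looks like the sketch below with the ClearML SDK. The project, task, queue, and parameter names are placeholders for illustration, and it assumes a template inference task already exists and a clearml-agent is listening on the target queue.

```python
from clearml import Task

# Fetch the base inference task (the one holding the model + inference code).
# Project and task names here are assumed for illustration.
template = Task.get_task(
    project_name="batch-inference",
    task_name="inference-template",
)

# Clone it so the template itself is never modified
cloned = Task.clone(source_task=template, name="inference-batch-2024-01-01")

# Override the input arguments for this particular batch
# ("Args/input_path" is a hypothetical parameter of the template task)
cloned.set_parameters({"Args/input_path": "s3://my-bucket/batch-2024-01-01/"})

# Send the clone to a queue; an agent picks it up and runs it.
# If the agent runs in --docker mode, the task executes inside a container
# (the base image can be set on the task or in the agent configuration).
Task.enqueue(task=cloned, queue_name="default")
```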