Hi SubstantialElk6
Generically, we would 'export' the preprocessing steps, set up an inference server, and then pipe data through the above to get results. How should we achieve this with ClearML?
We are working on integrating the OpenVINO and Nvidia Triton serving engines into ClearML (they will both be available soon)
Automated retraining
In cases of data drift, retraining of models would be necessary. Generically, we pass newly labelled data to fine-tune the weights of the deployed model and then redeploy without user intervention. How should we achieve this with ClearML?
So basically you write a service Task (which can be deployed on the services queue, or packaged as a standalone container) that polls the state of the clearml-server (i.e. checks whether a new Dataset Task was created); once it detects one, it clones the pipeline Task and enqueues it for execution on the services queue.
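A minimal sketch of such a polling service, assuming the `clearml` SDK is installed and you already know the id of the pipeline Task to clone. The project name, queue name, and poll interval below are hypothetical placeholders, not values from the thread:

```python
import time


def find_new_dataset_ids(known_ids, current_ids):
    """Return dataset ids present now that we have not seen before."""
    return sorted(set(current_ids) - set(known_ids))


def main():
    # Imported lazily so the pure helper above stays usable without clearml
    from clearml import Task, Dataset

    PIPELINE_TASK_ID = "<pipeline-task-id>"  # placeholder: id of the pipeline Task to clone
    DATASET_PROJECT = "my_datasets"          # placeholder project name
    POLL_INTERVAL_SEC = 300

    # Seed with the datasets that already exist, so only *new* ones trigger retraining
    known_ids = {d["id"] for d in Dataset.list_datasets(dataset_project=DATASET_PROJECT)}

    while True:
        current = [d["id"] for d in Dataset.list_datasets(dataset_project=DATASET_PROJECT)]
        for dataset_id in find_new_dataset_ids(known_ids, current):
            # New Dataset detected: clone the pipeline Task and enqueue the clone
            cloned = Task.clone(source_task=PIPELINE_TASK_ID)
            Task.enqueue(cloned, queue_name="services")
            known_ids.add(dataset_id)
        time.sleep(POLL_INTERVAL_SEC)


if __name__ == "__main__":
    main()
```

The script itself would typically be launched as a Task on the services queue, so the ClearML agent keeps it alive alongside other long-running services.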