Hi, I'm working on post-deployment data and model monitoring using ClearML. The idea is this:
Use ClearML to serve my model out to Triton.

Data monitoring: create a ClearML Pipeline with code to perform data drift detection for this model, using tools like Evidently or Alibi Detect. Trigger the pipeline via a ClearML Task whenever I have sufficient new production data. After the pipeline runs, it would plot some UI and recommendations.

Model performance monitoring: create a ClearML Pipeline with code to evaluate model performance. The code simply batches newly annotated production data and runs it through the model to compute metrics. Trigger the pipeline via a ClearML Task whenever I have sufficient newly annotated data. After the pipeline runs, it would plot some UI and recommendations.
The tricky parts are:
How should I present the UI and recommendations from within the ClearML UI? And how should I present a 'decision button' within the ClearML UI for users to trigger retraining?
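For the recommendations themselves, the pipeline task's Logger (`report_table`, `report_plotly`, `report_text`) can publish plots and tables into the task's results tabs. For the "decision button", ClearML has no native button widget; one workaround (an assumption on my part, not an official ClearML feature) is a gate task exposing a user property that users flip in the web UI, with a monitor process polling it and enqueuing a retraining task. A sketch, where the property name `approve_retrain` and both task IDs are hypothetical:

```python
# Sketch of a "decision button" pattern for ClearML. Assumes a configured
# clearml.conf pointing at your server; the guarded import lets the pure
# decision helper below run (and be tested) without a server.
try:
    from clearml import Task
except ImportError:
    Task = None

def should_retrain(user_properties: dict) -> bool:
    """Interpret the gate task's user-edited property as an approval flag.
    'approve_retrain' is an arbitrary property name chosen for this sketch."""
    value = str(user_properties.get("approve_retrain", "false")).strip().lower()
    return value in ("true", "1", "yes")

def poll_gate_and_maybe_retrain(gate_task_id: str, retrain_task_id: str,
                                queue: str = "default") -> None:
    """Poll the gate task; if a user set approve_retrain=true in the UI,
    clone the retraining task template and enqueue it on an agent queue."""
    gate = Task.get_task(task_id=gate_task_id)
    props = {k: v.get("value") for k, v in gate.get_user_properties().items()}
    if should_retrain(props):
        retrain = Task.clone(source_task=retrain_task_id)
        Task.enqueue(retrain, queue_name=queue)
        gate.set_user_properties(approve_retrain="false")  # reset the "button"
```

Users "press the button" by editing the gate task's user property in the UI; the polling loop (e.g. a scheduled task) does the rest.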
Some MLOps tools, such as Valohai and Iguazio, handle much of this automatically.
Would appreciate advice from anyone who has tried something similar for their models in production.