@<1523701205467926528:profile|AgitatedDove14> Thanks! The only thing is that I prefer serving my models in-house and only performing the monitoring via ClearML. By the way, I saw there is a project dashboard app which might support the visualization I am looking for. Is it suitable for such a use case?
I prefer serving my models in-house and only performing the monitoring via ClearML.
clearml-serving
is infrastructure for you to run models 🙂
to clarify, clearml-serving
is running on your end (meaning this is not SaaS where a 3rd party is running the model)
By the way, I saw there is a project dashboard app which might support the visualization I am looking for. Is it suitable for such a use case?
Hmm interesting, actually it might, it does collect metrics over time and averages them
Hi @<1523701205467926528:profile|AgitatedDove14> ,
I guess I can log the input-output pairs and report the average accuracy as a scalar. However, I'm not sure if this is the right way to monitor my data. Obviously, using iterations makes sense when training a model and tracking the loss, but when we are in production, I'm not sure if this dashboard is meant for that purpose.
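For context, the kind of manual reporting described above could look roughly like this with the ClearML Logger (a minimal sketch; the project/task names and the log_prediction helper are illustrative, not part of ClearML):

```python
from clearml import Task

# Illustrative names - a task used purely as a monitoring stream
task = Task.init(project_name="production-monitoring", task_name="model-accuracy")
logger = task.get_logger()

n_samples = 0
n_correct = 0

def log_prediction(model_input, model_output, is_correct: bool) -> None:
    """Report a running-average accuracy scalar; the sample count doubles as the iteration."""
    global n_samples, n_correct
    n_samples += 1
    n_correct += int(is_correct)
    logger.report_scalar(
        title="accuracy",
        series="running average",
        value=n_correct / n_samples,
        iteration=n_samples,
    )
```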
so first, yes, I totally agree. This is why clearml-serving
has a dedicated statistics module that creates histograms over time; we push them into Prometheus and connect Grafana to it for dashboards and alerts.
To be honest, I would just use it instead of reporting manually, wdyt?
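For reference, custom statistics in clearml-serving are typically collected from the user-supplied Preprocess module through a collect_custom_statistics_fn callback. Below is a rough sketch; the method signatures follow the examples in the clearml-serving repo but have changed between versions, so treat the exact arguments (and the "features"/"confidence" keys) as assumptions to verify against the version you deploy:

```python
from typing import Any

# Sketch of a clearml-serving custom Preprocess module (signatures approximate,
# based on the repo examples - check your clearml-serving version).
class Preprocess(object):
    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # "features" is an illustrative request-body key, not a fixed schema
        return body["features"]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # values reported here feed the statistics histograms -> Prometheus -> Grafana
        if collect_custom_statistics_fn:
            collect_custom_statistics_fn({"confidence": float(max(data))})
        return {"prediction": list(data)}
```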
Hi @<1724960475575226368:profile|GloriousKoala29>
Is there a way to aggregate the results, such as defining an iteration as the accuracy of 100 samples?
Hmm, I'm assuming what you actually want is to store it with the actual input/output and a score, is that correct?
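As a concrete sketch of that aggregation idea (assuming plain ClearML Logger calls; the window size, names and add_sample helper are illustrative): buffer 100 samples, report their mean score as one "iteration", and keep the raw input/output/score rows as a table next to it.

```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="production-monitoring", task_name="windowed-accuracy")
logger = task.get_logger()

WINDOW = 100      # one "iteration" = 100 samples
buffer = []       # rows of input / output / score
window_idx = 0

def add_sample(model_input, model_output, score: float) -> None:
    """Collect one sample; every WINDOW samples report the aggregate and the raw rows."""
    global window_idx
    buffer.append({"input": model_input, "output": model_output, "score": score})
    if len(buffer) < WINDOW:
        return
    df = pd.DataFrame(buffer)
    # mean score over the window, reported as a single scalar point
    logger.report_scalar(
        title="accuracy", series="per-100-samples",
        value=float(df["score"].mean()), iteration=window_idx,
    )
    # the actual input/output pairs with their score, stored alongside the aggregate
    logger.report_table(
        title="samples", series="window", iteration=window_idx, table_plot=df,
    )
    buffer.clear()
    window_idx += 1
```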