Hi everyone! I started using ClearML to track some of our experiments last week (currently using the pro tier), but I’m having some issues trying to compare ...
I’m reporting Validation Report (best) every time I find a better model, and Validation Report (latest) every time. The Evaluation Report is something I run after the training itself is complete, so it’s not tied to a specific iteration (I’m passing None).
So, if the comparison only shows the latest iteration, would the solution be to report all three series at the last iteration (roughly as sketched below)? Or is there a different way to report plots that are not tied to an iteration?
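Something like this is what I have in mind (just a sketch, not my actual code — the `report_table` usage, project/task names, and the placeholder DataFrames and `last_iteration` value are assumptions; my real reports may differ):

```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="report-at-last-iteration")
logger = task.get_logger()

last_iteration = 1000  # placeholder: the final training iteration

# Placeholder report contents
best_validation_df = pd.DataFrame({"metric": ["accuracy"], "value": [0.91]})
latest_validation_df = pd.DataFrame({"metric": ["accuracy"], "value": [0.89]})
evaluation_df = pd.DataFrame({"metric": ["accuracy"], "value": [0.90]})

# Report all three at the same (last) iteration so they all show up when
# comparing experiments, instead of passing iteration=None for the evaluation.
logger.report_table(title="Validation Report", series="best",
                    iteration=last_iteration, table_plot=best_validation_df)
logger.report_table(title="Validation Report", series="latest",
                    iteration=last_iteration, table_plot=latest_validation_df)
logger.report_table(title="Evaluation Report", series="final",
                    iteration=last_iteration, table_plot=evaluation_df)
```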
@CumbersomeCormorant74 - will do