@<1564060248291938304:profile|AmusedParrot89> - let me check this and get back to you
@<1564060248291938304:profile|AmusedParrot89> - I see the logic in displaying the last iteration per metric in the compare screen. We'll need to think about whether changing this could cause any other issues.
In the meantime - may I ask you to open a GitHub issue, so it will be easier to track?
@<1564060248291938304:profile|AmusedParrot89> I'll have to check regarding the None value for the iteration, but this definitely means it's overwriting the last report every time you report a new one
@<1523703097560403968:profile|CumbersomeCormorant74> - will do
I’m reporting Validation Report (best) every time I find a better model, and Validation Report (latest) every time. The Evaluation Report is something I run after the training itself is complete, so it’s not tied to a specific iteration (I’m passing None). So, if the compare screen only shows the latest iteration, would the solution be to report all three series at the last iteration? Is there a different way to report plots that are not tied to an iteration?
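For illustration only (this is a conceptual sketch, not ClearML's actual storage implementation), the overwriting behavior described above can be modeled as reports keyed by (title, series, iteration): when the iteration is always None, each new report replaces the previous one, while reporting all series at one final iteration keeps them distinct.

```python
# Conceptual model of per-iteration plot reporting -- names are illustrative,
# not ClearML's real internals.
reports = {}

def report_plot(title, series, iteration, data):
    # Each (title, series, iteration) key holds one plot;
    # reporting again with the same key overwrites the stored plot.
    reports[(title, series, iteration)] = data

# "Evaluation Report" sent with iteration=None after every run:
report_plot("Evaluation Report", "eval", None, {"acc": 0.90})
report_plot("Evaluation Report", "eval", None, {"acc": 0.95})
# Only the latest None-iteration report survives.

# Reporting all three series at the same final iteration keeps each one:
final_iteration = 100
for series in ("best", "latest", "eval"):
    report_plot("Validation Report", series, final_iteration, {})

print(len(reports))  # 1 overwritten None-key entry + 3 distinct series = 4
```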
@<1523701087100473344:profile|SuccessfulKoala55> - yes, plots are reported every iteration.
@<1564060248291938304:profile|AmusedParrot89> - the plot comparison indeed compares the latest iteration of each experiment. I will see if this can be better indicated somewhere
@<1564060248291938304:profile|AmusedParrot89> are you reporting these for every iteration, or once?