We love PRs 🙂 It would be greatly appreciated.
I think this is the relevant repo from the UI side - None
Although if it were a parameter in the SDK as well, I think this might also involve the SDK and the BE.
Might have to look at the API interface to the SDK to better understand how these things are reported
Great, thanks both! I suspect this might need an extra option to be passed via the SDK, to save the iteration scaling at logging time, which the UI can then use at rendering time.
What is the best way to achieve that please?
Yeah, I understand the logic of wanting this separation of iteration vs epoch since they sometimes correlate to different 'events'. I don't think there is an elegant way out of the box to do it currently.
Maybe open a GitHub feature request to follow up on this 🙂
(actually, that might even be feasible without touching the UI, depending how the plot is rendered, but I'll check)
Hi @<1546665634195050496:profile|SolidGoose91> , I guess you are referring to scalars. We have 3 options for the X axis: from the settings menu choose "Wall time", which is the closest to epoch, though it will normalize the clock to local time
From the doc I seemed to find ways to log 2D scatter plots, but not line plots :/ (found)
It also seems simpler to keep the scalar logging structure, but be able to pass a multiplier (reflecting the `eval_n_steps` in, for example, PyTorch Lightning).
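A minimal sketch of that idea, assuming an evaluation routine that runs every `eval_every_n_steps` optimizer steps (the name is illustrative, not an existing ClearML or Lightning parameter):

```python
# Hypothetical helper: map the k-th evaluation onto the optimizer-step
# axis so evaluation scalars line up with training scalars.
def scaled_iteration(eval_index: int, eval_every_n_steps: int) -> int:
    return eval_index * eval_every_n_steps

# With ClearML installed, the evaluation scalar would then be reported
# at the scaled iteration, e.g.:
#   from clearml import Logger
#   Logger.current_logger().report_scalar(
#       title="loss", series="eval", value=eval_loss,
#       iteration=scaled_iteration(k, eval_every_n_steps))

print(scaled_iteration(3, 500))  # third evaluation → optimizer step 1500
```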
Yes, exactly. Here is the logic behind it: I have plots where iterations represent different units. For some of these plots (call them A), iterations are optimization steps, while for others (call them B) they are evaluation iterations, occurring every N optimization steps. I would like to either:
- Change the X label so these different plots do not have the same label when they represent different things.
- Or, even better, keep the single "iterations" label but be able to change how I log the evaluation plots B (epoch-scaled) so that their x-axis is multiplied by the number of optimization iterations in an epoch (i.e. multiplied by `dataset_size / batch_size`). Thus the x-axes of both the A and B plots would be aligned. The second option would be ideal: I could see the evaluation plots on the same scale as the training.
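For concreteness, the epoch-to-step alignment in the second option can be sketched in plain Python (`dataset_size` and `batch_size` are the quantities mentioned above):

```python
def steps_per_epoch(dataset_size: int, batch_size: int) -> int:
    """Optimization steps in one epoch; a trailing partial batch
    still counts as a step (ceiling division)."""
    return -(-dataset_size // batch_size)

def epoch_to_step(epoch: int, dataset_size: int, batch_size: int) -> int:
    """X position of an epoch-indexed point on the step-indexed axis."""
    return epoch * steps_per_epoch(dataset_size, batch_size)

# e.g. 1000 samples at batch size 32 -> 32 steps per epoch,
# so epoch 5 sits at optimizer step 160 on the shared axis.
```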
Logging scalars also leverages ClearML automatic logging. One problem is that this automatic logging seems to keep its own internal "iteration" counter for each scalar, as opposed to keeping track of, say, the optimizer's number of steps.
That could be fixed simply in the ClearML Python lib by allowing a per-scalar iteration multiplier to be set.
Thanks @<1523703436166565888:profile|DeterminedCrab71> . Yes, I've seen the three options to plot different things. What I'm trying to do is for the "Iterations" plot to keep the same plot but just change the X label, not the time series. In matplotlib that would be a call to `xlabel`.
What is the best way to achieve that please?
I think you would need to edit the webserver code to change iterations to epochs in the naming of the x axis
Happy to jump on a call if easier to make sense of it :)
Thanks. That would be very helpful. Some of our graphs are logged by optimization steps, whereas some by epochs, so having all called "Iterations" is not ideal.
logically that doesn't make sense, iteration is a different scale than time. These values are indeed hard-coded
maybe this can be reported as a plot instead of a scalar; this way you can build the plot as you like
I'm not sure. Maybe @<1523703436166565888:profile|DeterminedCrab71> might have some input
The problem with logging as a 2D plot is that we lose the streaming: if I understand the documentation correctly, `Logger.current_logger().report_scatter2d` logs a single, frozen 2D plot when you know the full X and Y data, and you would have to do that at each evaluation step.
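To illustrate why that is clunky, the scatter route means keeping the full history yourself and re-sending it on every evaluation (a sketch; `report_scatter2d` takes the complete data each call):

```python
# Accumulate the full (step, value) history and rebuild the scatter
# data on every evaluation -- the whole plot is re-reported each time.
history = []

def record_eval(step, value):
    history.append((step, value))
    scatter = list(history)  # full data so far, not just the new point
    # With ClearML installed you would re-send it all, e.g.:
    #   from clearml import Logger
    #   Logger.current_logger().report_scatter2d(
    #       "evaluation", "loss", scatter=scatter,
    #       iteration=step, xaxis="step", yaxis="loss")
    return scatter
```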
Logging scalars lets you log a growing time series, i.e. add to the existing series/plot at every "iteration", and thus monitor progress over time in a single plot. It's a much more logical setting.
Thanks @<1523701070390366208:profile|CostlyOstrich36> ! I'll do that - and might even peek under the hood to see if I can make a PR. What's the best repo for that? Is it that of the ClearML Python package?