Answered
Is there any way to change the x-axis on the charts for scalars, to say e.g. "epochs" instead of "iterations"? Or is that hardcoded?


  
  
Posted one year ago

Answers 20


I'm not sure. Maybe @<1523703436166565888:profile|DeterminedCrab71> might have some input

  
  
Posted one year ago

Thanks. That would be very helpful. Some of our graphs are logged by optimization steps, whereas some by epochs, so having all called "Iterations" is not ideal.

  
  
Posted one year ago

Hi @<1546665634195050496:profile|SolidGoose91> , I guess you are referring to scalars. We have 3 options for the x-axis: from the settings menu, choose "Wall time", which is the closest to epochs, though it will normalize the clock to local time.

  
  
Posted one year ago

Thanks @<1523703436166565888:profile|DeterminedCrab71> . Yes, I've seen the three options to plot different things. What I'm trying to do is keep the same "Iterations" plot but just change the x label, not the time series. In matplotlib that would be a call to xlabel .

  
  
Posted one year ago

Logically that doesn't make sense; iteration is a different scale than time. These values are indeed hard-coded.

  
  
Posted one year ago

Yes, exactly. Here is the logical sense it makes: I have plots where iterations represent different units. For some of these plots (call them A), iterations are optimization steps, while for others (call them B) they are evaluation iterations, occurring every N optimization steps. I would like to either:

  • Change the X label so these different plots do not have the same label when they represent different things.
  • Or, even better, keep the unique "iterations" label but be able to change how I log the evaluation plots B (epoch-scaled) so that their x-axis is multiplied by the number of optimization iterations in an epoch (i.e. multiply by dataset_size/batch_size). Thus the x-axes of both A and B plots would be aligned. The second option would be ideal: I could see the evaluation plots on the same scale as the training.
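The second option above can be sketched as a small rescaling helper. The helper name and the `dataset_size`/`batch_size` parameters are illustrative; only `Logger.report_scalar` in the usage comment is ClearML's actual scalar-reporting API.

```python
# Sketch of option 2: rescale epoch-indexed iterations to optimizer steps
# so evaluation scalars (B) align with training scalars (A) on one x-axis.
# epoch_to_step is a hypothetical helper, not part of ClearML.

def epoch_to_step(epoch, dataset_size, batch_size):
    """Convert an epoch index to the equivalent optimizer-step index."""
    steps_per_epoch = dataset_size // batch_size
    return epoch * steps_per_epoch

# Usage with ClearML's scalar logger (real API):
# from clearml import Logger
# Logger.current_logger().report_scalar(
#     "loss", "eval", value=eval_loss,
#     iteration=epoch_to_step(epoch, dataset_size=50_000, batch_size=128),
# )
```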
  
  
Posted one year ago

What is the best way to achieve that please?

  
  
Posted one year ago

Happy to jump on a call if easier to make sense of it :)

  
  
Posted one year ago

What is the best way to achieve that please?

I think you would need to edit the webserver code to change "iterations" to "epochs" in the naming of the x-axis

  
  
Posted one year ago

Maybe this can be reported as a plot instead of a scalar; this way you can build the plot as you like

  
  
Posted one year ago

From the docs I seemed to find ways to log 2D scatter plots, but not line plots :/
It also seems simpler to keep the scalar logging structure, but be able to pass a multiplier (reflecting eval_n_steps in, for example, PyTorch Lightning)

  
  
Posted one year ago

The problem with logging as a 2D plot is that we lose the streaming: if I understand the documentation correctly, Logger.current_logger().report_scatter2d logs a single, frozen 2D plot when you already know the full X and Y data, and you would have to do that at each evaluation step.
Logging scalars allows logging a growing time series, i.e. adding to the existing series/plot at every "iteration", thus being able to monitor progress over time in one single plot. It's a much more logical setting.
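One possible workaround, sketched under the assumption that re-sending the full series at each call is acceptable: keep the (x, y) history yourself and pass the whole list to report_scatter2d every evaluation step, so the rendered plot still appears to grow. The accumulation helper below is hypothetical; only `report_scatter2d` and its parameters are the ClearML API.

```python
# Hypothetical streaming workaround: report_scatter2d expects the full
# (x, y) data on each call, so keep the history and re-report everything
# at every evaluation step to get a plot that grows over time.

history = []  # accumulated [epoch, value] pairs

def log_eval_point(epoch, value, logger=None):
    history.append([epoch, value])
    if logger is not None:  # a clearml.Logger instance
        logger.report_scatter2d(
            "eval", "loss", iteration=epoch,
            scatter=history, xaxis="epoch", yaxis="loss", mode="lines",
        )
    return len(history)
```

With `logger=None` this just accumulates, which makes the mechanism easy to check before wiring in a real Logger.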

  
  
Posted one year ago

Logging scalars also leverages ClearML's automatic logging. One problem is that this automatic logging seems to keep its own internal "iteration" counter for each scalar, as opposed to tracking, say, the optimizer's number of steps.
That could be fixed simply in the ClearML Python lib by allowing a per-scalar iteration multiplier to be set.
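Such a per-scalar multiplier could be prototyped outside the library as a thin wrapper around the logger, before proposing it as an SDK change. `ScaledLogger` and `multipliers` are hypothetical names, not part of the ClearML SDK; only the wrapped `report_scalar` call is real API.

```python
# Hypothetical prototype of a per-scalar iteration multiplier, written as
# a wrapper around ClearML's Logger rather than a change inside the library.

class ScaledLogger:
    def __init__(self, logger, multipliers=None):
        self._logger = logger                  # clearml Logger, or None for a dry run
        self._multipliers = multipliers or {}  # e.g. {"eval loss": steps_per_epoch}

    def report_scalar(self, title, series, value, iteration):
        # scale the iteration for titles that have a multiplier; default 1
        scaled = iteration * self._multipliers.get(title, 1)
        if self._logger is not None:
            self._logger.report_scalar(title, series, value, iteration=scaled)
        return scaled
```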

  
  
Posted one year ago

Does that make sense?

  
  
Posted one year ago

Yeah, I understand the logic of wanting this separation of iteration vs epoch since they sometimes correlate to different 'events'. I don't think there is an elegant way out of the box to do it currently.

Maybe open a GitHub feature request to follow up on this 🙂

  
  
Posted one year ago

Thanks @<1523701070390366208:profile|CostlyOstrich36> ! I'll do that, and might even peek under the hood to see if I can make a PR. What's the best repo for that? Is it that of the ClearML Python package?

  
  
Posted one year ago

(do you welcome PRs?)

  
  
Posted one year ago

We love PRs 🙂 It would be greatly appreciated.
I think this is the relevant repo from the UI side - None
Although if it were a parameter in the SDK as well, I think this might also involve the SDK and the BE.

Might have to look at the API interface to the SDK to better understand how these things are reported

  
  
Posted one year ago

Great, thanks both! I suspect this might need an extra option to be passed via the SDK, to save the iteration scaling at logging time, which the UI can then use at rendering time.

  
  
Posted one year ago

(actually, that might even be feasible without touching the UI, depending how the plot is rendered, but I'll check)

  
  
Posted one year ago