Answered

Hi. I'm using ClearML for logging my experiments. Can I compare experiments by plotting graphs? For example, every experiment logs the time per training iteration and the accuracy per epoch. I want to create a graph with "average time per iteration" on the X-axis and "maximum accuracy" on the Y-axis. Is it possible to aggregate these values from all experiments and plot a graph in the dashboard?

  
  
Posted 3 years ago

Answers 11


(image attached)

  
  
Posted 3 years ago

(image attached)

  
  
Posted 3 years ago

AgitatedDove14 I'm looking for something else. Basically, I want to compare the accuracy with respect to another variable. I can get the accuracy of each experiment (see picture). Additionally, I can log the other variable (call it T) and show it as a scalar per experiment. However, I don't see an option to simply plot the graph of accuracy versus T, which is different for every experiment. The solution that loops through all tasks might work anyway.

  
  
Posted 3 years ago

SoreDragonfly16, in the Hyperparameters tab you have "Parallel Coordinates" (next to "Add Experiment" there is a button saying "Values"; press it and there should be "Parallel Coordinates").
Is that it?

  
  
Posted 3 years ago

very nice

  
  
Posted 3 years ago

because comparing experiments using graphs is very useful. I think it is a nice-to-have feature.

So currently when you compare the experiments' graphs you can select the specific scalars to compare, and it updates in real time!
You can also bookmark the actual URL and it is fully reproducible (i.e. the full state is stored).
You can also add custom columns to the experiment table (with the metrics) and sort / filter based on them, and create a summary dashboard (again, like all pages in the web app, the URL is fully reproducible, so you can bookmark this dashboard).
SoreDragonfly16 wdyt?

  
  
Posted 3 years ago

SoreDragonfly16 as SmallDeer34 mentioned, you can iterate over the Tasks, pull metrics (with either task.get_last_scalar_metrics() or task.get_reported_scalars()), then report them on the Task that runs the loop itself with the Logger.
wdyt?
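For reference, a rough, untested sketch of that loop (the project, metric, and series names below are placeholders, not values from this thread):

```python
from clearml import Task

# Task that runs the loop and will hold the aggregated plot
agg_task = Task.init(project_name="my_project", task_name="experiment comparison")
logger = agg_task.get_logger()

points = []
for task in Task.get_tasks(project_name="my_project"):
    # {metric_title: {series: {"last": ..., "min": ..., "max": ...}}}
    metrics = task.get_last_scalar_metrics()
    try:
        avg_time = metrics["time_per_iteration"]["series"]["last"]
        max_acc = metrics["accuracy"]["series"]["max"]
    except KeyError:
        continue  # skip tasks that did not report these scalars
    points.append((avg_time, max_acc))

# One scatter plot: average time per iteration (x) vs. maximum accuracy (y)
logger.report_scatter2d(
    title="accuracy vs. time per iteration",
    series="experiments",
    scatter=points,
    iteration=0,
    xaxis="average time per iteration",
    yaxis="maximum accuracy",
)
```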

  
  
Posted 3 years ago

So presumably you could write a Python loop that goes through and pulls the metrics into a list, then makes a plot locally. Not sure about creating a dashboard within the ClearML web interface, though!
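A minimal sketch of that idea (untested; project and metric names are placeholders, and the metric-fetching call is the one mentioned in the answer above):

```python
from clearml import Task
import matplotlib.pyplot as plt

times, accuracies = [], []
for task in Task.get_tasks(project_name="my_project"):
    metrics = task.get_last_scalar_metrics()
    try:
        times.append(metrics["time_per_iteration"]["series"]["last"])
        accuracies.append(metrics["accuracy"]["series"]["max"])
    except KeyError:
        continue  # skip tasks missing these scalars

# Plot locally instead of reporting back to ClearML
plt.scatter(times, accuracies)
plt.xlabel("average time per iteration")
plt.ylabel("maximum accuracy")
plt.show()
```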

  
  
Posted 3 years ago

AgitatedDove14 That would work of course. I wonder what the best practice is for doing such things, because comparing experiments using graphs is very useful. I think it is a nice-to-have feature.

  
  
Posted 3 years ago

This discussion might be relevant; it shows how to query a Task for metrics in code: https://clearml.slack.com/archives/CTK20V944/p1626992991375500?thread_ts=1626981377.374400&cid=CTK20V944
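(I haven't verified exactly what that thread does, but querying a single Task's scalar history in code looks roughly like the sketch below; the task ID is a placeholder.)

```python
from clearml import Task

task = Task.get_task(task_id="<your-task-id>")
# Roughly: {metric_title: {series: {"x": [...], "y": [...]}}}
scalars = task.get_reported_scalars()
for title, series_dict in scalars.items():
    for series, values in series_dict.items():
        print(title, series, "last value:", values["y"][-1])
```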

  
  
Posted 3 years ago

Oh, that's cool, didn't know about that.

  
  
Posted 3 years ago