Answered
Hi everyone! I started using ClearML to track some of our experiments last week (currently using the Pro tier), but I’m having some issues trying to compare the plots of two experiments.

Hi everyone! I started using ClearML to track some of our experiments last week (currently using the pro tier), but I’m having some issues trying to compare the plots of two experiments. Each experiment has three tables reported as plots: one plot with a single series ( Final Evaluation Report / Evaluation Report ), and one plot with two series ( Validation Report , with Validation Report (best) and Validation Report (latest) ). The first image below shows what the tables look like when looking at a single experiment.
When I try to compare two experiments ( Baseline from main 04/28 and Running test again for comparison ), the plots tab only shows one of these series side by side (see the second image below). It appears to be showing only the Validation Report plot, and only the second series (the latest series, even though it doesn’t say that anywhere).
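In case it helps, this is roughly how the tables are reported (a minimal sketch, assuming Logger.report_table with pandas DataFrames; the project name, data, and the exact title/series split are placeholders):

```python
import pandas as pd
from clearml import Task

# Placeholder project/data, just to illustrate the reporting pattern
task = Task.init(project_name="my-project", task_name="Running test again for comparison")
logger = task.get_logger()
report_df = pd.DataFrame({"metric": ["accuracy", "f1"], "value": [0.91, 0.87]})

# Plot with two series, reported during training
iteration = 10
logger.report_table(title="Validation Report", series="Validation Report (latest)",
                    iteration=iteration, table_plot=report_df)
logger.report_table(title="Validation Report", series="Validation Report (best)",
                    iteration=iteration, table_plot=report_df)

# Plot with a single series, reported once after training is done (no iteration)
logger.report_table(title="Final Evaluation Report", series="Evaluation Report",
                    iteration=None, table_plot=report_df)
```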

Is this a problem in how I report these plots, or a known issue?
Thanks!
[image: tables as they appear for a single experiment]
[image: comparison view showing only one series]

  
  
Posted one year ago

Answers 7


@<1523701087100473344:profile|SuccessfulKoala55> - yes, plots are reported every iteration.
@<1564060248291938304:profile|AmusedParrot89> - the plot comparison indeed compares the latest iteration of the experiments. I will see if this can be better indicated somewhere.

  
  
Posted one year ago

@<1564060248291938304:profile|AmusedParrot89> - let me check this and get back to you

  
  
Posted one year ago

@<1523703097560403968:profile|CumbersomeCormorant74> - will do

  
  
Posted one year ago

I’m reporting Validation Report (best) every time I find a better model, and Validation Report (latest) every iteration. The Evaluation Report is something I run after the training itself is complete, so it’s not tied to a specific iteration (I’m passing None as the iteration).

So, if the comparison only shows the latest iteration, would the solution be to report all three series at the last iteration? Is there a different way to report plots that are not tied to an iteration?
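In other words, would something like this be the way to go (a rough sketch of the workaround I have in mind; final_iteration and the DataFrames are placeholders)?

```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="my-project", task_name="Running test again for comparison")
logger = task.get_logger()

final_iteration = 100  # placeholder: the last training iteration
best_df = latest_df = eval_df = pd.DataFrame({"metric": ["accuracy"], "value": [0.91]})

# Re-report all three series at the same (final) iteration, so the compare view
# (which shows the latest iteration per metric) picks up all of them
logger.report_table(title="Validation Report", series="Validation Report (best)",
                    iteration=final_iteration, table_plot=best_df)
logger.report_table(title="Validation Report", series="Validation Report (latest)",
                    iteration=final_iteration, table_plot=latest_df)
logger.report_table(title="Final Evaluation Report", series="Evaluation Report",
                    iteration=final_iteration, table_plot=eval_df)
```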

  
  
Posted one year ago

@<1564060248291938304:profile|AmusedParrot89> are you reporting these for every iteration, or once?

  
  
Posted one year ago

@<1564060248291938304:profile|AmusedParrot89> - I see the logic in displaying the last iteration per metric in the compare screen. We will need to think about whether this would cause any other issues.
In the meantime, may I ask you to open a GitHub issue, so it will be easier to track?

  
  
Posted one year ago

@<1564060248291938304:profile|AmusedParrot89> I'll have to check regarding the None value for the iteration, but this definitely means it's overwriting the last report every time you report a new one.
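Roughly like this, for illustration (a sketch; with iteration=None and the same title/series, the second call simply replaces the first in the plots tab):

```python
import pandas as pd
from clearml import Logger

logger = Logger.current_logger()  # assumes a Task has already been initialized
first_df = pd.DataFrame({"value": [1]})
second_df = pd.DataFrame({"value": [2]})

# Same title/series with iteration=None: the second report overwrites the first
logger.report_table(title="Final Evaluation Report", series="Evaluation Report",
                    iteration=None, table_plot=first_df)
logger.report_table(title="Final Evaluation Report", series="Evaluation Report",
                    iteration=None, table_plot=second_df)
```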

  
  
Posted one year ago