Hi, I am logging some scalars from a simple ML experiment, and I'm seeing different curves logged with ClearML depending on when I run the code. At the moment we are basically drawing some random numbers and logging their values, but the spacing between measurement points on the ClearML website differs between runs: some runs have minutes between measurements, and some have seconds. We can see that the code reaches the logging API every second, so we would expect the curves on the website to have data points every second instead of every minute. The curves with one-minute gaps between data points also have far lower variance than the curves with data points every second, which suggests that ClearML is sometimes averaging the logged scalars and sometimes not. The code being run is identical, just at different times. Is ClearML averaging somewhere in its logging, or is there something we are missing? Appreciate your help!

Posted 2 months ago

Answers 4


With the logger.report_scalar
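For reference, a minimal sketch of the setup described in the question, using `Logger.report_scalar` as mentioned above (the project and task names, the distribution, and the loop length are placeholders, not from the original post):

```python
import random
import time

from clearml import Task

# Initialize the experiment (names here are placeholders)
task = Task.init(project_name="examples", task_name="scalar-frequency-test")
logger = task.get_logger()

# Draw a random number and report it once per second,
# mirroring the "random numbers logged every second" setup
for iteration in range(60):
    value = random.gauss(0.0, 1.0)
    logger.report_scalar(
        title="random", series="gauss", value=value, iteration=iteration
    )
    time.sleep(1)

task.close()
```

Note that the SDK batches scalar reports in the background, so the interval at which points reach the server is not necessarily the interval at which `report_scalar` is called.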

Posted 2 months ago

Hi @<1671689458606411776:profile|StormySeaturtle98>, how are you reporting these numbers?

Posted 2 months ago

It seems that the longer it runs, and the more data points it accumulates, the more it starts averaging over iterations.

Posted 2 months ago

Hi @<1523701087100473344:profile|SuccessfulKoala55> We wonder whether this is intended behavior from ClearML, that the longer the experiment runs, the more it averages over data points? We have had experiments where the variance in the data points was significant, and this "feature" meant that our measured variance would become smaller and smaller.
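The variance shrinkage described here is exactly what averaging predicts: the mean of n independent samples has roughly 1/n the variance of the raw samples. A quick standalone check (plain Python, no ClearML; the per-second/per-minute block size of 60 is an illustrative assumption):

```python
import random
import statistics

random.seed(0)

# Simulate per-second scalar reports drawn from N(0, 1)
raw = [random.gauss(0.0, 1.0) for _ in range(6000)]

# Simulate downsampling: average each consecutive block of 60 points,
# as if one point per minute were kept instead of one per second
averaged = [statistics.fmean(raw[i:i + 60]) for i in range(0, len(raw), 60)]

var_raw = statistics.pvariance(raw)
var_avg = statistics.pvariance(averaged)

# Averaging 60 samples cuts the observed variance roughly 60-fold
print(var_raw, var_avg, var_raw / var_avg)
```

So if the server averages adjacent iterations once a plot accumulates many points, the displayed curve will look much smoother than the raw per-second reports.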

Posted one month ago