Answered

Hi all,
I want to compare plots from 2 different tasks, e.g., I have a histogram of F1 values for 2 different models.
However, the histogram comparison only shrinks the bars (see attached image).
Is there a way to keep the bars at normal width, or to overlay the histograms one over another using transparent colors?
[attached image: histogram comparison with shrunken bars]

Posted one year ago

Answers 6


Hi @<1582179661935284224:profile|AbruptJellyfish92> , looks like a bug. Can you please open a GitHub issue to follow up on this?

Posted one year ago

In addition, when I report a histogram with many bins (for example, 60), I get the same result of thin bars.
It seems like a problem with the configuration of the UI plots in general.
[attached image: histogram with thin bars]

Posted one year ago

That is more relevant for "clearml-web", right?

Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> None

Posted one year ago

Hi @<1582179661935284224:profile|AbruptJellyfish92> , how do the histograms look when you're not in comparison mode?

Can you provide a self-contained snippet that creates such histograms and reproduces this behavior, please?

Posted one year ago

Hi @<1523701070390366208:profile|CostlyOstrich36> ,
Here is the snippet and the original histogram:

import matplotlib.pyplot as plt

for metric in evaluator.metrics:
    print(f"reporting plots for metric {metric}")
    # report the mean of the metric as a single scalar value
    task.logger.report_single_value("avg_" + metric, float(results[metric].mean()))
    # build a histogram of the per-sample metric values
    plt.figure()
    plt.hist(results[metric].dropna(), bins=25)
    fig = plt.gcf()
    task.logger.report_matplotlib_figure(metric, metric, fig)
[attached image: original histogram with normal-width bars]
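As a possible workaround on the matplotlib side (rather than waiting on the ClearML UI comparison fix), both distributions can be drawn into a single figure with semi-transparent overlapping bars. A minimal sketch, with synthetic F1 scores standing in for the real `results[metric]` values from the two models:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Synthetic F1 scores; stand-ins for the two models' real results
rng = np.random.default_rng(0)
f1_model_a = rng.normal(0.75, 0.05, 200)
f1_model_b = rng.normal(0.80, 0.04, 200)

plt.figure()
# alpha < 1 makes overlapping bars visible through each other
plt.hist(f1_model_a, bins=25, alpha=0.5, label="model A")
plt.hist(f1_model_b, bins=25, alpha=0.5, label="model B")
plt.legend()
fig = plt.gcf()
# fig can then be reported with task.logger.report_matplotlib_figure(...)
```

This sidesteps the comparison view entirely by putting both histograms in one reported figure.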

Posted one year ago