Answered
Hi, is there a way to log sklearn metrics (like accuracy/precision) in a tabular way rather than a plot?


  
  
Posted 2 years ago

Answers 11


HappyDove3 Notice that the goal in https://github.com/allegroai/clearml/issues/400 is to see a table plot in the UI scalars tab for a specific experiment (with additional discussion of how these will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure the experiment comparison scalars view to show single values instead of the time-series graph, which I think will give you the metrics comparison you're looking for. Does that work?
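For reference, here's a minimal sketch of that single-value approach (assuming a clearml version recent enough to ship Logger.report_single_value; the project/task names and values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="sklearn-single-values")
logger = task.get_logger()

# report_single_value logs a metric with no iteration axis, so the UI can
# render it as a single value rather than a time-series plot
logger.report_single_value(name="accuracy", value=0.92)
logger.report_single_value(name="precision", value=0.88)
```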

  
  
Posted 2 years ago

The webUI won't calculate anything for you, so if you want to see accuracy, precision etc. you'll have to calculate those in your code and then report them as scalars for the UI to know what they are.

As you said, sklearn models don't work with iterations, so when using the logger to report the scalar, you can just set the iteration number to 0.
The downside is that your scalars will still be plotted, only as a single point in the graph instead of shown in a table. This is a known open issue for now ( https://github.com/allegroai/clearml/issues/400 ), but it would be cool if you could build it and contribute it yourself 😄
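A minimal sketch of that workaround (the project/task names and the y_true/y_pred arrays are placeholders for illustration):

```python
from clearml import Task
from sklearn.metrics import accuracy_score, precision_score

task = Task.init(project_name="examples", task_name="sklearn-metrics")
logger = task.get_logger()

# ... train your model and produce predictions ...
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# sklearn models have no notion of iterations, so everything is reported
# at iteration 0 and each metric shows up as a single-point graph
logger.report_scalar(title="metrics", series="accuracy",
                     value=accuracy_score(y_true, y_pred), iteration=0)
logger.report_scalar(title="metrics", series="precision",
                     value=precision_score(y_true, y_pred), iteration=0)
```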

  
  
Posted 2 years ago

If I have time I'll take a look 🙂 https://github.com/allegroai/clearml/issues/400

  
  
Posted 2 years ago

I agree, I came across the same issue too. Your post helps make it clear, so hopefully it can be pushed forward! 🙂

  
  
Posted 2 years ago

Thanks! I'll try running it with iteration=0 and then adding the value as a field in the experiment overview.

  
  
Posted 2 years ago

Seems like quite an important feature for basic metrics comparison, no?

  
  
Posted 2 years ago

I started exploring ClearML as an experiment tracker and ran a simple example with a sklearn RandomForest. I want to log the accuracy score so I can compare a few experiments based on this metric, and I'm not sure how to log it.
This is a simple use case, not a scalar tracked over iterations.

  
  
Posted 2 years ago

As you can see in the issue comments, for now you can report it using iteration=0 and then add the value as a field in the experiment overview (as a leaderboard). This will give you a quick overview of your metrics per experiment in the main experiment list 🙂

  
  
Posted 2 years ago

HappyDove3 Hi 🙂

Well, since all the data is logged you can simply use the API to retrieve it and create the tables yourself quite easily!
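Something along these lines should work (a sketch only; the project name is a placeholder, and it assumes pandas is available for the tabulation):

```python
import pandas as pd
from clearml import Task

rows = []
for task in Task.get_tasks(project_name="examples"):
    # get_last_scalar_metrics() returns {title: {series: {"last": ...}}}
    metrics = task.get_last_scalar_metrics()
    row = {"task": task.name}
    for title, series in metrics.items():
        for name, values in series.items():
            row[f"{title}/{name}"] = values.get("last")
    rows.append(row)

# one row per experiment, one column per logged metric
print(pd.DataFrame(rows))
```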

  
  
Posted 2 years ago

You can do it by comparing experiments. What is your use case? I think I might be missing something. Can you please elaborate?

  
  
Posted 2 years ago

I thought I'd have a way to compare my models' accuracy, precision, recall etc. through the web UI

  
  
Posted 2 years ago