Hi, I am trying to report the validation accuracy and loss values to the dashboard. Do I need to manually log these values, or is there another way to do it? How can I see these in a plot in the dashboard?


Hi GrittyHawk31! ClearML is integrated with a bunch of frameworks from which it tries to automatically gather information. You can find a list here: https://clear.ml/docs/latest/docs/integrations/libraries

For example, if you're already reporting scalars to TensorBoard, you won't have to add any ClearML code; they will be captured automatically. The same goes for e.g. LightGBM. Take a look at the example code in the link to find out what is automatically supported for your framework.
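As a minimal sketch (assuming you use PyTorch's SummaryWriter; the project/task names and metric values below are just placeholders), calling Task.init() before your TensorBoard logging is all that's needed for the scalars to show up in the dashboard:
`
from clearml import Task
from torch.utils.tensorboard import SummaryWriter

# Task.init() hooks into TensorBoard, so anything written by SummaryWriter
# is also sent to the ClearML dashboard
task = Task.init(project_name="my project", task_name="tensorboard example")

writer = SummaryWriter()
for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # placeholder for your real validation loss
    val_acc = epoch / 10.0        # placeholder for your real validation accuracy
    writer.add_scalar("val/loss", val_loss, epoch)
    writer.add_scalar("val/accuracy", val_acc, epoch)
writer.close()
`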

Manually adding a scalar is of course possible too and very easy:
`
# set up the experiment manager
from clearml import Task

task = Task.init(project_name="my project", task_name="my task")

# ... your training code ...

# report a scalar manually
task.get_logger().report_scalar(...)
`
Reporting a scalar will automatically plot it for you in the webUI. More info about report_scalar() here: https://clear.ml/docs/latest/docs/references/sdk/logger#report_scalar
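For instance, a fuller sketch of the manual route (the titles, series names, and values here are just placeholders) could look like this:
`
from clearml import Task

task = Task.init(project_name="my project", task_name="manual scalar reporting")
logger = task.get_logger()

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # placeholder for your real validation loss
    val_acc = epoch / 10.0        # placeholder for your real validation accuracy
    logger.report_scalar(title="validation", series="loss", value=val_loss, iteration=epoch)
    logger.report_scalar(title="validation", series="accuracy", value=val_acc, iteration=epoch)
`
Each title becomes its own plot in the experiment's Scalars tab, and each series shows up as a line in that plot.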

  
  