Answered

I need to create some meta-analysis of my experiments. Is there a "dashboard" view for Trains where I can create plots for all experiment metadata? If not, is there an easy way to export the tables so I can make this plot locally?

I want to create a "KPI" view to see how the key metrics change across experiments/time.

For example, if I have 3 experiments, I might plot a simple line chart:
exp1: accuracy 90%
exp2: accuracy 90.1%
exp3: accuracy 91%

  
  
Posted 3 years ago

Answers 7


I am abusing the "hyperparameters" to store a "summary" dictionary with my key metrics, because diffing hyperparameters across experiments behaves more nicely.
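This trick can be sketched in plain Python (the metric names and the best-so-far rule below are hypothetical; in ClearML/Trains such a dict would typically be attached with something like `task.connect(summary)` so it shows up under hyperparameters):

```python
# Maintain a "summary" dict with custom best-so-far logic, then attach it
# to the experiment (e.g. task.connect(summary) in ClearML -- not shown here).

def update_summary(summary, metrics, lower_is_better=('loss',)):
    """Update summary in place with the best-so-far value per metric."""
    for name, value in metrics.items():
        if name not in summary:
            summary[name] = value
        elif name in lower_is_better:
            summary[name] = min(summary[name], value)
        else:
            summary[name] = max(summary[name], value)
    return summary

summary = {}
update_summary(summary, {'loss': 0.5, 'accuracy': 0.90})
update_summary(summary, {'loss': 0.3, 'accuracy': 0.91})
update_summary(summary, {'loss': 0.4, 'accuracy': 0.89})
# summary is now {'loss': 0.3, 'accuracy': 0.91}
```

Because the logic is yours, "best" can mean anything you like, not just last/min/max of a single series.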

  
  
Posted 3 years ago

It would be nice if there were an "export" function to export the full (or a selected subset of the) experiment table view.
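Until such a button exists, one workaround is to fetch the experiments programmatically (e.g. via `Task.get_tasks`, as in the snippets below) and dump the fields you care about to CSV; the row fields here are hypothetical placeholders for whatever you extract from each task:

```python
import csv
import io

def export_rows(rows, fileobj):
    """Write a list of flat dicts (one per experiment) as CSV."""
    if not rows:
        return
    writer = csv.DictWriter(fileobj, fieldnames=sorted(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# Hypothetical rows, e.g. built from Task.get_tasks(...) results.
rows = [
    {'name': 'exp1', 'accuracy': 0.9},
    {'name': 'exp2', 'accuracy': 0.901},
    {'name': 'exp3', 'accuracy': 0.91},
]
buf = io.StringIO()
export_rows(rows, buf)
print(buf.getvalue())
```

From the CSV you can make the KPI plot locally with whatever plotting tool you prefer.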

  
  
Posted 3 years ago

task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
    task_reporting.get_logger().report_something

Instead of get_last_scalar_metrics(), I am using t._data.hyperparams['summary'] to get the metrics I needed.

  
  
Posted 3 years ago

That's interesting, how would you select experiments to be viewed by the dashboard?

  
  
Posted 3 years ago

There are several ways of doing what you need, but none of them are 'magical' like we pride ourselves on. For that, we would need user input like yours in order to find the commonalities.

  
  
Posted 3 years ago

EnviousStarfish54 are those scalars reported ?
If they are, you can just do:
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
    task_reporting.get_logger().report_something

  
  
Posted 3 years ago

For example, I am logging these metrics as "configuration/hyperparameters". The reason I am not using report_scalar() is that it only supports "last/min/max" aggregation. This way I can control whatever custom logic I need in my code.

I need to compare this metadata across experiments. Although the dashboard supports choosing "min/max/last", it cannot support comparing "the lowest loss" across experiments.
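A minimal sketch of that kind of custom logic (the experiment names and per-epoch values are hypothetical): pick, per experiment, the epoch with the lowest loss, and compare the metrics recorded at that epoch across experiments:

```python
# Hypothetical per-epoch history for each experiment:
# a list of (loss, accuracy) tuples, one entry per epoch.
histories = {
    'exp1': [(0.80, 0.85), (0.50, 0.90), (0.55, 0.89)],
    'exp2': [(0.70, 0.88), (0.45, 0.90), (0.40, 0.91)],
}

def at_lowest_loss(history):
    """Return the (loss, accuracy) pair at the epoch where loss is lowest."""
    return min(history, key=lambda pair: pair[0])

best = {name: at_lowest_loss(h) for name, h in histories.items()}
# best == {'exp1': (0.50, 0.90), 'exp2': (0.40, 0.91)}
```

This is exactly what a fixed min/max/last aggregation cannot express: the accuracy shown is the one *at* the best-loss epoch, not the best accuracy overall.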

  
  
Posted 3 years ago