That's interesting, how would you select experiments to be viewed by the dashboard?
EnviousStarfish54 are those scalars reported?
If they are, you can just do:
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
    task_reporting.get_logger().report_something
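For completeness, a runnable version of that sketch could look like the following (assuming the clearml SDK, and using report_scalar() as a stand-in for whatever you actually want to re-report; the project/task names are placeholders):
from clearml import Task

# Reporting task that will aggregate the collected metrics
task_reporting = Task.init(project_name='project', task_name='report')
logger = task_reporting.get_logger()

# Fetch every task in the project whose name matches the partial string
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')

for i, t in enumerate(tasks):
    # Nested dict, roughly {title: {series: {'last': ..., 'min': ..., 'max': ...}}}
    metrics = t.get_last_scalar_metrics()
    for title, series_dict in metrics.items():
        for series, values in series_dict.items():
            # Re-report the last value of each scalar on the reporting task
            logger.report_scalar(title=title,
                                 series='{} ({})'.format(series, t.name),
                                 value=values['last'],
                                 iteration=i)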
There are several ways of doing what you need, but none of them is 'magical' in the way we pride ourselves on. For that, we would need user input like yours in order to find the commonalities.
I am abusing the "hyperparameters" section to store a "summary" dictionary with my key metrics, because diffing hyperparameters across experiments behaves more nicely.
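For reference, a minimal sketch of that workaround with the clearml SDK (the metric names and the toy training loop are purely illustrative, and the name argument of connect() may not exist on very old SDK versions):
from clearml import Task

task = Task.init(project_name='project', task_name='experiment')

best_val_loss, best_epoch = float('inf'), -1
for epoch in range(10):              # illustrative training loop
    val_loss = 1.0 / (epoch + 1)     # stand-in for the real validation loss
    if val_loss < best_val_loss:
        best_val_loss, best_epoch = val_loss, epoch

# Store the custom aggregates as their own configuration section ("summary"),
# so they can be diffed across experiments just like hyperparameters
task.connect({'best_val_loss': best_val_loss, 'best_epoch': best_epoch}, name='summary')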
It would be nice if there were an "export" function to just export the experiment table view (all or selected experiments).
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
    task_reporting.get_logger().report_something
Instead of get_last_scalar_metrics(), I am using t._data.hyperparams['summary'] to get the metrics I need.
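If you want to avoid the private _data attribute, the same values should be reachable through the public parameter getters, along these lines (assuming the metrics were connected under a section called "summary"; parameter values come back as strings):
from clearml import Task

task_reporting = Task.init(project_name='project', task_name='report')
logger = task_reporting.get_logger()

tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for i, t in enumerate(tasks):
    # get_parameters_as_dict() returns {section: {param: value}}
    summary = t.get_parameters_as_dict().get('summary', {})
    for key, value in summary.items():
        logger.report_scalar(title='summary',
                             series='{} ({})'.format(key, t.name),
                             value=float(value),
                             iteration=i)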
To elaborate, I am logging these metrics under "configuration/hyperparameters". The reason I am not using report_scalar() is that it only supports "last/min/max". This way I can apply whatever custom logic I need in my code.
I need to compare this metadata across experiments. Although the dashboard supports choosing "min/max/last", it does not support comparing "the lowest loss" across experiments.
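One way to approximate that comparison outside the UI, under the same assumptions as above (the custom metrics live in a "summary" configuration section and include a hypothetical best_val_loss key), is to pull the summaries of all matching tasks and report them as one table:
import pandas as pd
from clearml import Task

task_reporting = Task.init(project_name='project', task_name='report')

rows = []
for t in Task.get_tasks(project_name='project', task_name='partial_task_name_here'):
    summary = t.get_parameters_as_dict().get('summary', {})
    rows.append({'task': t.name, **{k: float(v) for k, v in summary.items()}})

# Sort by the custom "lowest loss" metric and attach the result to the reporting task
df = pd.DataFrame(rows).sort_values('best_val_loss')
task_reporting.get_logger().report_table(title='summary comparison',
                                         series='by best_val_loss',
                                         iteration=0,
                                         table_plot=df)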