Answered
Hello! I Had To Inject The Configuration Into ClearML With

Hello!
I had to inject the configuration into ClearML with task.connect_configuration(OmegaConf.to_container(cfg, resolve=True)) because I use Hydra as my config manager.
When comparing the hyperparameters of my runs, I am unable to condition the high-dimensional plot on these parameters, as only the CLI arguments are available. Since I use Hydra, the CLI does not contain the information I am looking for. See below the list of proposed variables. How can I use other configuration values as hyperparameters? Thanks for your help
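
For reference, a rough sketch of how the configuration is attached in my entry point (the project, task, and configuration-object names here are placeholders, and the training code is omitted):

```python
import hydra
from clearml import Task
from omegaconf import DictConfig, OmegaConf


@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Placeholder project/task names; adjust to your own setup.
    task = Task.init(project_name="my-project", task_name="hydra-run")

    # Convert the resolved OmegaConf tree into a plain dict and attach it to
    # the task as a configuration object. It shows up under CONFIGURATION,
    # but not as hyperparameters I can select in the comparison plot.
    task.connect_configuration(
        OmegaConf.to_container(cfg, resolve=True), name="OmegaConf"
    )

    # ... training code goes here ...


if __name__ == "__main__":
    main()
```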

  
  
Posted 3 years ago

Answers 30


Below is an example with one metric reported using multirun. This is taken from a single experiment results page, as all runs feed into the same experiment. Unfortunately I have no idea what 1 refers to, for example. Is it possible to name each run, or to break them into several experiments?

  
  
Posted 3 years ago

I think it would make sense to have one task per run to make the comparison on hyper-parameters easier

I agree. Could you maybe open a GitHub issue on it? I want to make sure we solve this issue 🙂

  
  
Posted 3 years ago

It's a running number because PL is creating the same TB file for every run

  
  
Posted 3 years ago

but I have no idea what's behind 1, 2 and 3 compared to the first execution

  
  
Posted 3 years ago

And this is when I compare two tasks

  
  
Posted 3 years ago

on one experiment it overlays the same metrics (not taking into account the run number)

  
  
Posted 3 years ago

I assume it is reported into TB, right?

  
  
Posted 3 years ago

That is what I was hoping at first

  
  
Posted 3 years ago

the previous image was from the dashboard of one experiment

  
  
Posted 3 years ago

Right, I think the naming is a by-product of Hydra / TB

  
  
Posted 3 years ago

but to go back to your question, I think it would make sense to have one task per run to make the comparison on hyper-parameters easier
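
Just as a sketch of one possible way to get there (the per-run task name, the use of Hydra's job number, and the reuse_last_task_id=False flag are assumptions on my side, not a confirmed recipe):

```python
import hydra
from clearml import Task
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # In a multirun sweep, hydra.job.num is the index of the current run;
    # using it in the task name would give every run its own ClearML task.
    job_num = HydraConfig.get().job.num

    task = Task.init(
        project_name="my-project",          # placeholder
        task_name=f"sweep-run-{job_num}",   # one task per run
        reuse_last_task_id=False,           # don't attach to the previous run's task
    )

    # ... per-run training / evaluation ...

    # Close the task so the next run in the sweep starts fresh.
    task.close()


if __name__ == "__main__":
    main()
```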

  
  
Posted 3 years ago

between Hydra, PL, TB and ClearML, I am not quite sure which one is adding the prefix for each run

  
  
Posted 3 years ago

GloriousPanda26 Are you getting multiple Tasks or is it a single Task?

  
  
Posted 3 years ago

yes. As you can see, this one has the Hydra section reported in the config

  
  
Posted 3 years ago

So the naming is a by-product of the many TB files created (one per experiment); if you add different naming to the TB files, then this is what you'll be seeing in the UI. Make sense?
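
For example, just as a sketch (assuming PyTorch Lightning's TensorBoardLogger, which this project uses; the directory and names are placeholders), giving each run its own logger name/version is what you would then see as the prefix:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Each run writes its TB events under tb_logs/my_model/run_3/; per the note
# above, the naming of the TB files is what ends up shown in the UI.
logger = TensorBoardLogger(save_dir="tb_logs", name="my_model", version="run_3")
trainer = Trainer(logger=logger, max_epochs=10)
# trainer.fit(model, datamodule=dm)  # model / datamodule defined elsewhere
```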

  
  
Posted 3 years ago

I see

  
  
Posted 3 years ago

the import order is not related to the problem

  
  
Posted 3 years ago

GloriousPanda26 wouldn't it make more sense that multirun would create multiple experiments?

  
  
Posted 3 years ago

but despite the naming it's working quite well actually

  
  
Posted 3 years ago

issue created

  
  
Posted 3 years ago

ClearML does

  
  
Posted 3 years ago

it's a single task which contains metrics for all 4 executions

  
  
Posted 3 years ago

Thanks GloriousPanda26!

  
  
Posted 3 years ago

ClearML does

Thanks for doing that! :i_love_you_hand_sign:

  
  
Posted 3 years ago

I meant the OmegaConf

  
  
Posted 3 years ago

yes

  
  
Posted 3 years ago

but I have no idea what's behind 1, 2 and 3 compared to the first execution

This is why I would think multiple experiments, since it will store all the arguments (and I think these arguments are somehow being lost).
wdyt?

  
  
Posted 3 years ago

I am not really familiar with TB's internal mechanics. For this project we are using PyTorch Lightning.

  
  
Posted 3 years ago

but when I compare experiments, the run numbers are taken into account: "1:loss" is compared with "1:loss", and the "2:loss" series are put in a different graph

  
  
Posted 3 years ago