
Casting the configuration into a dict does not solve the problem, as ClearML does not capture the nested structure of the configuration object. This is how it looks with your example:
This seems to be working:
t.connect_configuration(OmegaConf.to_container(conf, resolve=True))
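For completeness, here is a minimal sketch of that call inside a full Hydra entry point; the project name, task name, and the conf/config.yaml layout are placeholders, not the actual project:
```
import hydra
from omegaconf import DictConfig, OmegaConf
from clearml import Task

@hydra.main(config_path="conf", config_name="config")
def train(conf: DictConfig) -> None:
    t = Task.init(project_name="demo", task_name="hydra-run")
    # to_container(resolve=True) turns the OmegaConf object into plain
    # nested dicts/lists, so the hierarchy survives in the ClearML UI
    t.connect_configuration(OmegaConf.to_container(conf, resolve=True))

if __name__ == "__main__":
    train()
```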
sorry for the delay. ClearML captures the command-line arguments, but they are Hydra parameters (multirun, config_dir, config_name, config_path, etc.). I append and override some hyperparameters of the model, but they are all stored as a single string under "overrides".
but to go back to your question, I think it would make sense to have one task per run to make the hyperparameter comparison easier (a sketch of this idea is below)
it's a single task which contains metrics for all 4 executions
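a rough sketch of the one-task-per-run idea (assuming Hydra 1.x, where HydraConfig exposes the multirun job number and the raw override strings; project and task names are placeholders):
```
import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path="conf", config_name="config")
def train(conf: DictConfig) -> None:
    hc = HydraConfig.get()
    # hc.job.num is the sweep index; it is only populated under --multirun
    t = Task.init(
        project_name="demo",
        task_name=f"run-{hc.job.num}",
        reuse_last_task_id=False,  # force a fresh task for every job
    )
    # hc.overrides.task holds the raw "key=value" strings from the CLI;
    # connect() logs them as individual parameters instead of one string
    t.connect(dict(o.split("=", 1) for o in hc.overrides.task),
              name="overrides")
    # ... training code ...
    t.close()  # close so the next multirun job opens its own task

if __name__ == "__main__":
    train()
```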
ClearML does
Thanks for doing that! :i_love_you_hand_sign:
I am not really familiar with TB internal mechanics. For this project we are using PyTorch Lightning.
the import order is not related to the problem
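for reference, the capture path looks roughly like this (hypothetical tiny model, placeholder names): self.log() writes TensorBoard scalars, and ClearML picks them up automatically once Task.init() has run
```
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from clearml import Task

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("loss", loss)  # -> TB event file -> ClearML scalar
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

if __name__ == "__main__":
    Task.init(project_name="demo", task_name="pl-tb-capture")
    data = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
    pl.Trainer(max_epochs=1).fit(TinyModel(), DataLoader(data, batch_size=16))
```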
but when I compare experiments the run numbers are taken into account: "1:loss" is compared with "1:loss", and the "2:loss" series end up in a different graph
the previous image was from the dashboard of one experiment
Below is an example with one metric reported using multirun. This is taken from a single experiment result page, as all runs feed the same experiment. Unfortunately I have no idea what "1" refers to, for example. Is it possible to name each run or to break them into several experiments?
but despite the naming it's working quite well actually
but I have no idea what's behind "1", "2" and "3" compared to the first execution
yes. As you can see, this one has the hydra section reported in the config
between Hydra, PL, TB and ClearML, I am not quite sure which one is adding the prefix for each run
on one experiment it overlays the same metrics (not taking the run number into account)