@<1523701070390366208:profile|CostlyOstrich36> Sorry for the delay, please see below:
AgitatedDove14 thank you!
Do you mean in my train.py, after Task.init(..)?
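Just to make sure I understand — something like this at the top of train.py? (a minimal sketch; project and task names are placeholders)
```python
# train.py -- minimal sketch; project/task names are placeholders
from clearml import Task

# Task.init() is called once, at the very start of the script,
# before any of the training code runs
task = Task.init(project_name="my_project", task_name="train")
```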
Thanks, I'll try this workaround.
Hmm, I don't know. I think I saw this feature in 'manual' HPO code, but I'm probably wrong.
What optimization method did you use?
AFAIK, it's the default one - "Bandit-based Bayesian ..."
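If it helps, the code equivalent of that setup would look roughly like this (a sketch based on the SDK examples, not the app's actual internals; the base task ID, the metric names, and the parameter range are placeholders I picked to match my 11 values):
```python
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
# BOHB ("Bandit-based Bayesian optimization with HyperBand") backend;
# needs the hpbandster extra installed
from clearml.automation.hpbandster import OptimizerBOHB

optimizer = HyperParameterOptimizer(
    base_task_id="BASE_TASK_ID",  # placeholder: the template experiment to clone
    hyper_parameters=[
        # 0.0 .. 0.5 in steps of 0.05 -> 11 possible values
        UniformParameterRange("General/learning_rate", 0.0, 0.5, step_size=0.05),
    ],
    objective_metric_title="validation",  # placeholder metric
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerBOHB,
    max_number_of_concurrent_tasks=2,
    total_max_jobs=20,
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
```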
Also, what are the values that the cloned experiments get?
As far as I see, there are a few experiments with the same value, for instance 0.3.
I'll try to add a log and play with the number format.
Thank you @<1523701070390366208:profile|CostlyOstrich36>, but I don't see a report in the HPO app experiment, just a few graphs and a Summary table.
Thank you again - found it. I haven't used them until now; I'll read up on them.
There is no code snippet; I created the HPO app via the UI.
In this case though, there are only 11 possible parameter "combinations". But by default, ClearML sets the maximum number of jobs much higher than that (check the advanced settings in the wizard).
I could understand if it used the minimum number of jobs, not the maximum 🙂
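If I end up scripting it instead of using the UI, I guess I could enumerate the values explicitly, so there is nothing to over-sample and no float-step rounding (a sketch; the parameter name is a placeholder):
```python
from clearml.automation import DiscreteParameterRange

# list the 11 values explicitly instead of using a float step,
# so the optimizer can only ever pick one of these exact values
dropout = DiscreteParameterRange(
    "General/dropout",
    values=[round(0.05 * i, 2) for i in range(11)],  # 0.0, 0.05, ..., 0.5
)
```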
BTW, I created another HPO app with two parameters, and instead of 11 × 6 = 66 jobs I saw 92.
I'll open a bug.
@<1523701118159294464:profile|ExasperatedCrab78> - many thanks!
Thank you again! I read about task.connect in https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk and now I see how I can add that.
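For reference, this is the pattern I mean (a minimal sketch following that docs page; the project/task names and the dict contents are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="train")

# connect a config dict; ClearML records it, and when the HPO app clones
# the task, the overridden values are injected back into this dict
config = {"learning_rate": 0.001, "dropout": 0.3}
config = task.connect(config)

print(config["learning_rate"])  # reflects the value set by the HPO app
```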
CostlyOstrich36 - thank you!
I guess the right word is 'results' - it's a list of KPI metrics that we use to compare our experiments.
Many thanks - using Logger seems much better!
Now I see my metrics under 'Plots'. But when I compare two experiments, ClearML just shows me two tables. I hoped to see the diff.
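In case it helps anyone later: reporting each KPI as a single value (instead of, or in addition to, a table) should let the comparison view show them side by side. A sketch with placeholder metric names and values:
```python
from clearml import Logger

logger = Logger.current_logger()

# a table shows up under 'Plots' and only compares as two separate tables;
# single values / scalars get a proper side-by-side view in comparison
logger.report_single_value(name="precision", value=0.91)  # placeholder values
logger.report_single_value(name="recall", value=0.87)

# or, for metrics that evolve over iterations:
logger.report_scalar(title="kpi", series="precision", value=0.91, iteration=0)
```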
Can you add a log of the HPO app? It's in hidden projects (Go to settings and enable viewing hidden projects).
Here is the log of the 'hpo-app' app. And it really seems to be related to the float type...
CostlyOstrich36 - thank you, got it!