Answered

HPO app question: my config includes 11 parameter values (0 - 1, step 0.1). I'd expect to see 11 experiments, but in fact it was "52 iterations". What am I missing? (Last time I asked a similar question, but this time there is no issue with the HPO-app integration - it gets metrics from its sub-tasks.)
I don't have iterations in my code, so I use hardcoded 'iteration' value:
clearml_logger.report_scalar("my_metric", "Test", iteration=1, value=my_metric)
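For reference, enumerating the configured range in plain Python (a sketch; the actual parameter name in the HPO app isn't shown in this thread) confirms that 0 to 1 with step 0.1 yields 11 distinct values, so a full grid would be exactly 11 experiments:

```python
# Enumerate 0.0 .. 1.0 with step 0.1, rounding each value to avoid
# floating-point drift (e.g. 3 * 0.1 != 0.3 exactly in binary floats).
values = [round(i * 0.1, 1) for i in range(11)]

print(len(values))  # 11 distinct parameter values -> 11 grid points
```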

  
  
Posted 8 months ago

Answers 10


@<1523701070390366208:profile|CostlyOstrich36>
Sorry for delay, please see below:
[3 screenshots attached]

  
  
Posted 8 months ago

Thank you, here it is:
[2 screenshots attached]

  
  
Posted 8 months ago

In the meantime, it might help to limit the number of jobs using the advanced settings. If you know the exact number of combinations and want every one run for sure, just set it that way 🙂

  
  
Posted 8 months ago

Hi @<1523701062857396224:profile|AttractiveShrimp45> , can you please share some screenshots of what you see and also share a code snippet of what reproduces this behavior?

  
  
Posted 8 months ago

Hi @<1523701062857396224:profile|AttractiveShrimp45> , I'm checking your issue myself. Do you see any duplicate experiments in the summary table?

  
  
Posted 8 months ago

Ok, so I think I recreated your issue. The problem is, HPO was designed to handle more possible combinations of items than is reasonable to test. In this case though, there are only 11 possible parameter "combinations". But by default, ClearML sets the maximum number of jobs much higher than that (check the advanced settings in the wizard).

It seems like HPO doesn't check for duplicate experiments though, so that means it will keep spawning experiments (even though it might have executed the exact same one before) until either its job budget, time budget or iterations budget is reached.
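To illustrate the mechanism described above, here is a small pure-Python simulation (not ClearML code; the budget of 100 is an assumed placeholder for the app's default). Sampling values independently without any duplicate check burns far more jobs than there are unique combinations:

```python
import random

random.seed(0)  # make the run repeatable
values = [round(i * 0.1, 1) for i in range(11)]  # the 11 grid points
seen, jobs, max_jobs = set(), 0, 100  # max_jobs: assumed default budget

# Keep launching "experiments" until every value has been tried
# or the job budget runs out -- duplicates are never filtered out.
while len(seen) < len(values) and jobs < max_jobs:
    seen.add(random.choice(values))
    jobs += 1

print(jobs)  # typically well above 11: duplicates consume the budget
```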

I think this is a bug, or should at least be looked at. @<1523701062857396224:profile|AttractiveShrimp45> Do you mind opening a GitHub issue for this, so we can track it? 🙂

Below is a screenshot where indeed two identical experiments were spawned:
[screenshot attached]

  
  
Posted 8 months ago

In this case though, there are only 11 possible parameter "combinations". But by default, ClearML sets the maximum amount of jobs much higher than that (check advanced settings in the wizard).

I could understand if it used the minimum number of jobs, not the maximum 🙂
BTW, I created another HPO app with two parameters, and instead of 11*6 = 66 jobs I saw 92.
I'll open a bug.
@<1523701118159294464:profile|ExasperatedCrab78> - many thanks!
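For what it's worth, the expected count for the two-parameter case can be checked with a full cross product (the second parameter's six values below are hypothetical placeholders; the thread doesn't list them):

```python
from itertools import product

param_a = [round(i * 0.1, 1) for i in range(11)]  # 11 values, as before
param_b = [1, 2, 3, 4, 5, 6]  # hypothetical second parameter (6 values)

grid = list(product(param_a, param_b))
print(len(grid))  # 66 unique combinations, yet 92 jobs were launched
```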

  
  
Posted 8 months ago

There is no code snippet; I created the HPO app via the UI.

  
  
Posted 8 months ago

Thanks, I'll try this workaround.

  
  
Posted 8 months ago

What is your configuration?

  
  
Posted 8 months ago
276 Views