Hello everyone. I'm using ClearML HPO and having a problem with OptimizerOptuna: the hyperparameter trials start to repeat the best options after some number of tries. I know there is a known issue with Optuna itself, but since that issue is still open, I wonder how it can be worked around. More in thread.

  
  
Posted one year ago

Answers 10


So I have an HPO pipeline like this, with many modules to be optimized.
[image: HPO pipeline]

  
  
Posted one year ago

And after some time I get a picture like this, where the same hyperparameter combinations are trained again.
[image: trials table with repeated hyperparameters]

  
  
Posted one year ago

My thought on a fix is to add code to each training script that fetches the parent HPO task's artifact table and looks for the same hyperparameters; if they already exist, abort the task. This would fix the wasted-compute issue, but I wonder if it can be done better, like spending this compute on other hyperparameters that would otherwise be left untried. A rough sketch of what I mean is below.
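Just a sketch, not tested: it assumes the optimizer task publishes its trials table as a pandas DataFrame artifact, and the artifact name 'summary' is a guess that would need checking against the optimizer task's Artifacts tab.

```python
from clearml import Task

task = Task.current_task()
my_params = task.get_parameters()  # flat dict, e.g. {'General/lr': '0.01'}

parent_id = task.parent  # id of the HPO optimizer task that spawned this trial
if parent_id:
    parent = Task.get_task(task_id=parent_id)
    artifact = parent.artifacts.get('summary')  # artifact name is a guess
    if artifact is not None:
        trials = artifact.get()  # assumed to be a pandas DataFrame
        # compare only the columns that correspond to this task's hyperparameters
        hparam_cols = [c for c in trials.columns if c in my_params]
        duplicate = bool(hparam_cols) and any(
            all(str(row[c]) == str(my_params[c]) for c in hparam_cols)
            for _, row in trials.iterrows()
        )
        if duplicate:
            # abort before wasting compute on an already-tried combination
            task.mark_stopped()
            raise SystemExit(0)
```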

  
  
Posted one year ago

Also, a totally separate issue: I wonder if there is early stopping for when it's obvious that the suggested hyperparameters are suboptimal; I couldn't find anything in the docs. I know there is a max_iteration_per_job argument, but I couldn't understand its usability from the docs either. For reference, here is roughly how I pass it.
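This is a minimal sketch of my setup; the base task id, metric names, and queue are placeholders.

```python
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id='<template-task-id>',  # placeholder
    hyper_parameters=[
        UniformParameterRange('General/lr', min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title='validation',  # placeholder metric
    objective_metric_series='loss',
    objective_metric_sign='min',
    optimizer_class=OptimizerOptuna,
    execution_queue='default',
    total_max_jobs=50,
    # stop any single trial once it has reported this many iterations
    max_iteration_per_job=100,
)
optimizer.start()
```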

  
  
Posted one year ago

Hi @<1623491856241266688:profile|TenseCrab59> , can you elaborate on what you mean by spending this compute on other hyperparameters? In theory, you could check whether a previous artifact file exists and then change the parameters and task name from within the code.

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> I mean that Optuna suggests {x=10, y=20}, for example. Then it becomes the next best result in the HPO process, and then Optuna tends to suggest the very same hyperparameters, while the parameter space hasn't been fully explored. If I cancel trials with the same hyperparameters, it is likely that a major part of the defined total_max_jobs will be cancelled, which renders this parameter hardly usable.

  
  
Posted one year ago

I understand. In that case you could implement some code to check whether the same parameters were used before and then 'switch' to different parameters that haven't been checked yet. I think it's a bit 'hacky', so I would suggest waiting for a fix from Optuna. Something like the sketch below.
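Just a rough sketch; was_already_tried() and resample_from_space() are hypothetical stand-ins for your own duplicate check and search-space sampling.

```python
import random

from clearml import Task


def was_already_tried(params):
    # stand-in for your duplicate check, e.g. the parent-artifact lookup above
    return False


def resample_from_space():
    # stand-in for drawing fresh values from your own search-space definition
    return {'General/lr': str(10 ** random.uniform(-4, -1))}


task = Task.current_task()
params = task.get_parameters()

if was_already_tried(params):
    params = resample_from_space()
    task.set_parameters(params)  # overwrite the duplicate suggestion
    task.set_name(task.name + ' (resampled)')  # make the switch visible in the UI
```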

  
  
Posted one year ago

Thanks, and by the way, can you say anything about early stopping? I asked about it here. I guess it can also only be done through 'hacky' solutions?

  
  
Posted one year ago

In the HPO application I see the following explanation:

'Maximum iterations per experiment after which it will be stopped. Iterations are based on the experiments' own reporting (for example, if experiments report every epoch, then iterations=epochs)'
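So the counter is driven by whatever scalars the experiment itself reports; if the training script never reports scalars, there may be nothing for max_iteration_per_job to count against. The kind of reporting it keys on looks roughly like this (project, task, and metric names are placeholders):

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='trial')  # placeholders


def train_one_epoch():
    return 0.0  # stand-in for your actual training step


logger = task.get_logger()
for epoch in range(10):
    loss = train_one_epoch()
    # the 'iteration' argument is what the per-job iteration limit counts
    logger.report_scalar(title='validation', series='loss',
                         value=loss, iteration=epoch)
```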

  
  
Posted one year ago

Well, that just didn't work for me: I set it to 1, and the experiments ran for the full time anyway.

  
  
Posted one year ago