Hello, community,

I hope this message finds you all well. I am currently working on a project involving hyperparameter optimization (HPO) using the Optuna optimizer. Specifically, I've been trying to navigate the parameters 'min_iteration_per_job' and 'max_iteration_per_job', which, as I understand it, are typically tied to the number of batch iterations.

My conundrum is that my training regimen is structured around epochs rather than batches. To complicate matters, I'm also looking to optimize the batch size as part of the HPO process, and with a varying batch size, a fixed number of batch iterations is hard to pin down. For instance, I initially set 'min_iteration_per_job' to 50 and 'max_iteration_per_job' to 250, yet I've observed Optuna terminating tasks after only 10 epochs. This behavior led me to believe that Optuna might be counting each batch, rather than each epoch, as a single iteration.
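For reference, here is a minimal sketch of the kind of controller setup I'm describing (the project name, base task ID, and metric names are placeholders from my own code, and 'General/batch_size' is just how I happen to expose the parameter):

from clearml import Task
from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer
from clearml.automation.optuna import OptimizerOptuna

# Controller task that drives the HPO run
task = Task.init(
    project_name="HPO",
    task_name="optuna-controller",
    task_type=Task.TaskTypes.optimizer,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-training-task-id>",  # placeholder
    hyper_parameters=[
        # Batch size is itself being optimized, so the number of
        # batch iterations per epoch differs from trial to trial.
        DiscreteParameterRange("General/batch_size", values=[16, 32, 64, 128]),
    ],
    objective_metric_title="validation",  # placeholder metric names
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    # The limits in question -- these appear to count reported
    # iterations, not epochs.
    min_iteration_per_job=50,
    max_iteration_per_job=250,
)
optimizer.start()
optimizer.wait()
optimizer.stop()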

I am reaching out to seek advice from anyone who may have faced a similar challenge. Is there a way to configure Optuna to perform HPO so that iterations are counted in epochs? Or is there another way to include the batch size as a tunable parameter without it conflicting with the iteration limits?

Any insights or guidance on this matter would be greatly appreciated.

Posted 8 months ago

Answers


Hi @<1523703652059975680:profile|ThickKitten19>! Could you try increasing max_iteration_per_job and check whether that helps? Also, any chance you are fixing the number of epochs to 10, either through a hyperparameter, e.g. DiscreteParameterRange("General/epochs", values=[10]), or because it is simply hard-coded to 10 when you call something like model.fit(epochs=10)?
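One more thing worth checking: as far as I understand, min_iteration_per_job and max_iteration_per_job count the reported iterations of the objective metric. So if your training script reports the objective once per epoch, the optimizer will effectively count epochs rather than batches. A minimal sketch of what I mean (the metric title/series and the train_one_epoch stub are placeholders; the names must match what you pass to the optimizer):

import random

from clearml import Task


def train_one_epoch():
    # Stand-in for a real training step; returns a fake validation accuracy
    return random.random()


task = Task.init(project_name="HPO", task_name="base-training")
logger = task.get_logger()

epochs = 100  # make sure this isn't hard-coded to 10
for epoch in range(epochs):
    val_accuracy = train_one_epoch()
    # Report the objective once per EPOCH, so the optimizer's iteration
    # axis (and hence min/max_iteration_per_job) counts epochs
    logger.report_scalar(
        title="validation",  # must match objective_metric_title
        series="accuracy",   # must match objective_metric_series
        value=val_accuracy,
        iteration=epoch,
    )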

Posted 8 months ago