Thanks! Yeah, I finally figured it out by looking a bit more at the source code of the clearml package.
For example, when I use this set of hyperparameters:
DiscreteParameterRange('General/activation', values=["relu", "sigmoid"]),
UniformParameterRange('General/dropout_prob', min_value=0.1, max_value=0.2, step_size=0.1),
UniformIntegerParameterRange('General/n_epochs', min_value=5, max_value=15, step_size=5),
and set total_max_jobs to 2*2*3=12, some combinations are not tried and some others are tried multiple times.
Also, in the source code of the Python module (clearml/automation/optuna/optuna.py) it looks like the behavior of randomly choosing a value between the specified limits is hardcoded. Is there a way to use another, smarter sampler, like GridSampler for example, to try every possible combination?
Hi @<1555000557775622144:profile|CharmingSealion31>! When creating the HyperParameterOptimizer, pass the argument optuna_sampler=YOUR_SAMPLER.
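For reference, a minimal sketch of what that could look like with an Optuna GridSampler (assuming OptimizerOptuna registers the parameters under the same 'General/...' names; the base task ID and the objective metric title/series below are placeholders, not from this thread):

import optuna
from clearml.automation import (
    HyperParameterOptimizer,
    DiscreteParameterRange,
    UniformParameterRange,
    UniformIntegerParameterRange,
)
from clearml.automation.optuna import OptimizerOptuna

# Grid covering every combination of the parameters above (2 * 2 * 3 = 12 trials)
grid_sampler = optuna.samplers.GridSampler({
    'General/activation': ["relu", "sigmoid"],
    'General/dropout_prob': [0.1, 0.2],
    'General/n_epochs': [5, 10, 15],
})

optimizer = HyperParameterOptimizer(
    base_task_id='<your_base_task_id>',  # placeholder: ID of the task to optimize
    hyper_parameters=[
        DiscreteParameterRange('General/activation', values=["relu", "sigmoid"]),
        UniformParameterRange('General/dropout_prob', min_value=0.1, max_value=0.2, step_size=0.1),
        UniformIntegerParameterRange('General/n_epochs', min_value=5, max_value=15, step_size=5),
    ],
    objective_metric_title='validation',  # placeholder metric title/series
    objective_metric_series='loss',
    objective_metric_sign='min',
    optimizer_class=OptimizerOptuna,
    total_max_jobs=12,
    optuna_sampler=grid_sampler,  # forwarded to the underlying Optuna study
)

optimizer.start_locally()
optimizer.wait()
optimizer.stop()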