Just to keep you updated, as promised 🙂
We have found the bug and will release a fix ASAP. I will keep you updated on that too 🙂
Can you maybe provide me an example of how to use ParameterSet?
MoodySheep3, a screenshot would be useful, just to understand the structure via the UI 🙂
Hi MoodySheep3,
Can you please provide screenshots from the experiment showing how the configuration looks?
`
# imports needed for this snippet
from clearml import Task
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    UniformIntegerParameterRange,
    UniformParameterRange,
)
from clearml.automation.parameters import ParameterSet
from clearml.automation.optuna import OptimizerOptuna

hyper_task = Task.init(
    project_name="***",
    task_name="hyper-param-tuning",
    task_type=Task.TaskTypes.optimizer,
    reuse_last_task_id=False,
)

optimizer = HyperParameterOptimizer(
    # specifying the task to be optimized; `task` is the template task,
    # which must already be in the system so it can be cloned
    base_task_id=task.id,
    # setting the hyperparameters to optimize
    hyper_parameters=[
        ParameterSet([{
            "General/data_module": "",
            "General/model": "",
            "General/": True,
            "General/model_kwargs/": "***",
            "General/trainer_kwargs/epochs": 100,
        }]),
        UniformParameterRange("General/data_module_kwargs/abundance_cutoff", min_value=0.001, max_value=0.005, step_size=0.001),
        UniformIntegerParameterRange("General/data_module_kwargs/batch_size", min_value=2, max_value=16, step_size=2),
        UniformIntegerParameterRange("General/model_kwargs/number_of_hidden_layers", min_value=2, max_value=5, step_size=1),
        UniformParameterRange("General/trainer_kwargs/default_lr", min_value=0.0001, max_value=0.01),
        DiscreteParameterRange("General/model_kwargs/***", ["mean", "max", "add"]),
    ],
    # setting the objective metric we want to maximize/minimize
    objective_metric_title="val_loss",
    objective_metric_series="val_loss",
    objective_metric_sign="min",
    # setting the optimizer
    optimizer_class=OptimizerOptuna,
    # configuring optimization parameters
    pool_period_min=2,
    execution_queue="default",
    max_number_of_concurrent_tasks=1,
    optimization_time_limit=10.,
    compute_time_limit=15,
    total_max_jobs=20,
    min_iteration_per_job=50,
    max_iteration_per_job=150000,
)
`
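(Side note: `task` in the snippet above has to reference a template task that already exists in the system. A minimal sketch of fetching one by name, with purely hypothetical project/task names:)
`
from clearml import Task

# hypothetical names, just for illustration
template_task = Task.get_task(project_name="my_project", task_name="template_experiment")
print(template_task.id)  # the value to pass as base_task_id
`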
parameter_optimization_space = [{
    type = "ParameterSet"
    name = null
    values = [{
        General/data_module = ""
        General/model = ""
        General/homogeneous = true
        General/model_kwargs/pooling_flow = "source_to_target"
        General/trainer_kwargs/epochs = 2
    }]
},
{
    type = "UniformParameterRange"
    name = "General/data_module_kwargs/"
    min_value = 0.001
    max_value = 0.005
    step_size = 0.001
    include_max = true
},
{
    type = "UniformIntegerParameterRange"
    name = "General/data_module_kwargs/batch_size"
    min_value = 2
    max_value = 16
    step_size = 2
    include_max = true
},
{
    type = "UniformIntegerParameterRange"
    name = "General/model_kwargs/number_of_hidden_layers"
    min_value = 2
    max_value = 5
    step_size = 1
    include_max = true
},
{
    type = "UniformParameterRange"
    name = "General/trainer_kwargs/default_lr"
    min_value = 0.0001
    max_value = 0.01
    step_size = null
    include_max = true
},
{
    type = "DiscreteParameterRange"
    name = "General/model_kwargs/global_pooling_reduce"
    values = ["mean", "max", "add"]
}]
Hi MoodySheep3,
I think you are using ParameterSet the way it is supposed to be used 🙂
When I run my examples I also get this warning, which is weird, because those hyperparameters do exist, and all the sub-tasks corresponding to a given parameter set find them! It is just a warning though; the script continues anyway and reaches the end without issue.
Can you please add a screenshot of how the hyperparameters show up in the UI for you?
Concerning how to use ParameterSet:
I first declare the set:
`
my_param_set = ParameterSet([
    {'General/batch_size': 32, 'General/epochs': 30},
    {'General/batch_size': 64, 'General/epochs': 20},
    {'General/batch_size': 128, 'General/epochs': 10},
])
`
This is a very basic example; it is also possible to use more complex things in the set (see https://clear.ml/docs/latest/docs/references/sdk/hpo_parameters_parameterset/ for UniformParameterRange usage in ParameterSet).
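For example, a combination can mix fixed values with a range to sample from, as the linked docs show (a minimal sketch; the parameter names here are illustrative):
`
my_mixed_set = ParameterSet([
    {'General/batch_size': 32,
     'General/lr': UniformParameterRange('General/lr', min_value=0.001, max_value=0.01, step_size=0.001)},
])
`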
Then I do as you did 🙂
`
optimizer = HyperParameterOptimizer(
    # specifying the task to be optimized; the task must already be in the system so it can be cloned
    base_task_id=args['template_task_id'],
    # setting the hyperparameters to optimize
    hyper_parameters=[
        my_param_set,
    ],
    # ... (the remaining arguments, as in your snippet)
)
`
If you need something more complete/complex, do not hesitate to ask 🙂 I kept it simple here because you are already doing it fine.
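(For completeness, once the optimizer object is constructed, a minimal sketch of actually launching it, using the standard HyperParameterOptimizer flow:)
`
optimizer.set_report_period(1)  # report progress every minute
optimizer.start()               # launch the optimization in the background
optimizer.wait()                # block until the optimization finishes or times out
optimizer.stop()                # make sure all background jobs are stopped
`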
PS: concerning the warning, I am going to inquire and will keep you updated.
Also, how are you running the HPO? From the examples?
SweetBadger76 CostlyOstrich36 After trying to run the same code (and ignoring the warning) I get a different error:
ValueError: HyperParameter type <class 'clearml.automation.parameters.ParameterSet'> not supported yet with OptimizerBOHB
I looked at the OptimizerOptuna code (clearml/automation/optuna/optuna.py) and saw that ParameterSet is really not supported there either.
Which optimizer supports ParameterSet?
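(For reference, and unverified: the plain GridSearch and RandomSearch strategies appear to build their jobs from Parameter.to_list(), which ParameterSet implements, so reading the source suggests they should accept it. A sketch, reusing the names from the snippets above:)
`
from clearml.automation import GridSearch  # or RandomSearch

optimizer = HyperParameterOptimizer(
    base_task_id=task.id,
    hyper_parameters=[my_param_set],
    objective_metric_title='val_loss',
    objective_metric_series='val_loss',
    objective_metric_sign='min',
    optimizer_class=GridSearch,  # instead of OptimizerOptuna / OptimizerBOHB
    execution_queue='default',
)
`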
One last (very) little thing: could you please open a GitHub issue for this irrelevant warning 🙏? It makes sense to register those bugs on GitHub, because our code and releases are hosted there.
Thank you!
http://github.com/allegroai/clearml/issues