Hi AbruptWorm50
I was wondering if it is possible to specify the 'patience' of the pruning algorithm?
Any of the kwargs passed to **optimizer_kwargs
will be passed directly to the optuna object
https://github.com/allegroai/clearml/blob/2e050cf913e10d4281d0d2e270eea1c7717a19c3/clearml/automation/optimization.py#L1096
It should allow you to control the parameters, no?
Regarding the callback, what exactly are you thinking of putting there?
Is this callback enough?
https://github.com/allegroai/clearml/blob/9624f2c715df933ff17ed5ae9bf3c0a0b5fd5a0e/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py#L23
Which file are you referring to? Can you link it?
Interesting, I am only now seeing **optimizer_kwargs
it seems that it will fix my problem. Would it be too much to ask for an example of how to initialize the optuna object with the kwargs (mainly how to initialize the 'trial', 'study', and 'objective' arguments)? 🙂
Thank you for the clarification, everything is clear now 🙂
AbruptWorm50 my apologies, I think I misled you: yes, you can pass generic arguments to the optimizer class, but specifically for optuna this is disabled (not sure why)
Specifically to your case, the way it works is:
your code logs to tensorboard; clearml catches the data and moves it to the Task (on the clearml-server); the optuna optimization is running on another machine; trial values are manually updated (i.e. the clearml optimization pulls the Task's reported metric from the server and updates optuna); optuna early stopping is called (i.e. trial.should_prune()); and if the trial needs to be stopped, the clearml optimization aborts the Task (the one running on a different machine)
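Very roughly, the loop described above could be sketched like this. Note this is just an illustration of the control flow, not actual ClearML or Optuna code: `FakeTrial`, `optimization_loop`, and the patience logic are all made-up stand-ins.

```python
# Hypothetical sketch of the described flow. All names here (FakeTrial,
# optimization_loop) are invented for illustration; they are NOT the real
# ClearML/Optuna internals.

class FakeTrial:
    """Stand-in for an optuna Trial: prunes when the metric stops improving
    for `patience` consecutive steps (a simple patience-based rule)."""

    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.stale = 0

    def report(self, value, step):
        # Track whether the (lower-is-better) metric improved this step
        if value < self.best:
            self.best = value
            self.stale = 0
        else:
            self.stale += 1

    def should_prune(self):
        return self.stale >= self.patience


def optimization_loop(metric_stream, trial):
    """Mimics the clearml optimization process: pull the Task's reported
    metric (here, just an iterable), update the trial manually, and abort
    the remote Task if the trial says to prune."""
    for step, value in enumerate(metric_stream):
        trial.report(value, step)      # manually update the trial value
        if trial.should_prune():       # optuna-style early-stopping check
            return f"aborted at step {step}"
    return "completed"
```

So for example, a metric that improves twice and then plateaus would get the Task aborted once the patience budget is exhausted, while a steadily improving metric runs to completion.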
Does that make sense ?
Specifically, what would be the part you would want to modify?
(Notice again that the Optuna process is not actually running on the same machine; even though in reality it can be the same one, it is not the same process. This is how it scales to multiple machines so quickly with clearml-agent)