Alas no, apologies. Are you saying that in the global_min case, if a trial returns an MAE of 1.3, but the previous trial got an MAE of 0.5, the optimiser gets told that the MAE of the latest model is 0.5 instead of the truth?
Bugs: definitely GitHub, this is the easiest to track.
Documentation: if these are small issues, Slack is fine; otherwise, a GitHub issue.
Regarding the documentation, we are working on another iteration of improvements, but if you find inaccuracies/broken links please report them 🙂
Phew! If I find any bugs or potential issues in the doco/comments too, where would be the best place to send that so I don't spam Slack if I find tiny issues? GitHub issues / DM / a specific Slack channel?
Hmmm:
WOOT WOOT we broke the record! Objective reached 17.071016994817196
WOOT WOOT we broke the record! Objective reached 17.14302934610711
These two seem strange, let me look into it.
Found it, definitely a bug in the callback; it has no effect on the HPO process itself.
Hi LudicrousParrot69
I guess you are right, this is not a trivial distinction:
min: means we are looking for the minimum value of a specific scalar, i.e. for 1.0, 0.5, 1.3 -> the optimizer will get these direct values and optimize based on them
global_min: means the optimizer gets the running minimum of that scalar. With the same example: 1.0, 0.5, 1.3 -> the HPO optimizer gets 1.0, 0.5, 0.5
The same holds for max/global_max. Makes sense?
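To put that same example in code, here is a minimal sketch of the two behaviours (the function names are just illustrative, not the actual API):
```python
# Minimal sketch of "min" vs "global_min" reporting (illustrative names, not the real API).

def min_objective(values):
    # "min": the optimizer sees each trial's raw scalar exactly as reported.
    return list(values)

def global_min_objective(values):
    # "global_min": the optimizer sees the running minimum, i.e. the best
    # value achieved by any snapshot up to that point.
    best = float("inf")
    out = []
    for v in values:
        best = min(best, v)
        out.append(best)
    return out

reported = [1.0, 0.5, 1.3]
print(min_objective(reported))         # [1.0, 0.5, 1.3]
print(global_min_objective(reported))  # [1.0, 0.5, 0.5]
```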
Correct, which makes sense if you have a stochastic process and you are looking for the best model snapshot. That said, I guess the default use case would be min/max (and not the global variants).
Okay, that makes sense then. What's still got me scratching my head is the examples printing out the WOOT WOOT for breaking the record despite clearly not breaking it. Hmm.