Ok, so I think I recreated your issue. The problem is, HPO was designed to handle more possible combinations of items than is reasonable to test. In this case though, there are only 11 possible parameter "combinations". But by default, ClearML sets the maximum number of jobs much higher than that (check advanced settings in the wizard).
It seems like HPO doesn't check for duplicate experiments though, so it will keep spawning experiments (even re-running the exact same one it has executed before) until its job budget, time budget, or iteration budget is reached.
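As a workaround until duplicates are handled, you can cap the job budget at the size of the parameter grid yourself before passing it to the optimizer. A minimal sketch (the parameter names and the "wizard default" value here are illustrative assumptions, not values from your setup):

```python
from itertools import product

# Hypothetical parameter grid -- names and values are just for illustration.
param_space = {
    "General/batch_size": [16, 32, 64],
    "General/learning_rate": [0.001, 0.01],
}

# Every distinct combination a grid search could try.
combos = list(product(*param_space.values()))
n_combinations = len(combos)  # 3 * 2 = 6

# Assumed default from the wizard's advanced settings, for illustration only.
default_total_max_jobs = 100

# Cap the budget at the grid size so HPO cannot spawn more jobs
# than there are distinct combinations, even without deduplication.
total_max_jobs = min(default_total_max_jobs, n_combinations)

print(n_combinations, total_max_jobs)  # 6 6
```

The capped value can then be passed as the optimizer's maximum-jobs setting instead of the default.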
I think this is a bug, or should at least be looked at. @<1523701062857396224:profile|AttractiveShrimp45> Do you mind opening a GitHub issue for this, so we can track it? 🙂
Below is a screenshot showing that 2 identical experiments were indeed spawned