Unfortunately, ClearML HPO does not "know" what is inside the task it is optimizing. This is by design, so that you can run HPO with no code changes inside the experiment. The trade-off is that the optimizer cannot do anything "smart" based on the task's internals, such as reusing intermediate results across trials.
However, is there a way you could use caching within your code itself, such as functools' LRU cache? It is built into Python and caches a function's return values, so repeated calls with the same arguments are served from memory instead of being recomputed.
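A minimal sketch of what that looks like (the function here is just a stand-in for your expensive step):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache; pass a number to cap memory use
def expensive_step(x: int) -> int:
    print(f"computing for x={x}")  # only printed on a cache miss
    return x * x

expensive_step(4)  # computed
expensive_step(4)  # returned instantly from the in-memory cache
```

One caveat: the cache only lives in that process's memory, and the arguments must be hashable (so e.g. numpy arrays won't work directly as keys).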
There also seem to be ways to cache to a local disk folder (check here, second answer). You could run the code locally once to warm the cache and then, e.g., save the cache folder as a ClearML Dataset and pull it in each worker. Or: you could keep a central cache folder on a network drive that is accessible by all your workers and point them all at the same location. A sketch of the first option follows below.
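Here is a hedged sketch of that dataset route, assuming joblib's `Memory` for the on-disk cache; the ClearML `Dataset` calls are the standard ones, but the dataset/project names and the cache path are made up for illustration:

```python
from pathlib import Path

from clearml import Dataset
from joblib import Memory

CACHE_DIR = Path("./hpo_cache")  # hypothetical local cache folder
memory = Memory(location=CACHE_DIR, verbose=0)

@memory.cache  # results are persisted to disk, keyed on the arguments
def expensive_step(x: int) -> int:
    return x * x

# --- locally: warm the cache, then publish it as a ClearML dataset ---
expensive_step(4)
ds = Dataset.create(dataset_name="hpo-cache", dataset_project="examples")
ds.add_files(str(CACHE_DIR))
ds.upload()
ds.finalize()

# --- inside each worker: pull a writable copy of the cache first ---
local_cache = Dataset.get(
    dataset_name="hpo-cache", dataset_project="examples"
).get_mutable_local_copy("./hpo_cache")
memory = Memory(location=local_cache, verbose=0)
```

With the network-drive variant you would skip the dataset upload/download entirely and just point `Memory(location=...)` at the shared folder from every worker.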