for me, at the moment, it means "manually" filtering the keys I've put in for the HP space. I find it a bit strange that they are not saved as part of the optimizer object.
the optimizer_task seems to have an attribute called hyper_parameters, but it's empty in my case.
that is the heaviest part for me
AgitatedDove14 , in my use case, the strings are regular expressions. If the regular expression string changes when it is reported, it messes up my run.
I can't define the "legal regular expression space" off the top of my head. The expected behavior is reported_string == original_string.
there seems to be additional logic on top of str_value = str(value):
str('.') -> '\.'
however, in the configuration I see '.' for non-nested strings
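To make the expected behavior concrete, here is a minimal sketch (the project, task, and parameter names are made up for illustration) of what I would expect to hold for a regex value passed through connect:
` from clearml import Task

task = Task.init(project_name="regex-example", task_name="connect-regex")

# regular-expression strings used as configuration values (illustrative names)
config = {"file_pattern": r"data_\d+\.csv", "sep": "."}
task.connect(config)

# expected behavior: reported_string == original_string,
# i.e. no escaping such as '.' -> '\.' is applied anywhere along the way
assert config["file_pattern"] == r"data_\d+\.csv"
assert config["sep"] == "." `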
AgitatedDove14 thanks, I actually experimented with a similar parallel pool approach, but the overhead seems to even out the benefit.
is there something you can think of for the first part though? pulling all the experiments with get_top_experiments()
optimizer.get_top_experiments(n)
That's good enough for me, I forgot about the all projects option
changing the queue order is cool, but a bit too limited. I have 30 jobs I want to multi-select and push up to first priority; doing that one by one is a lot of manual labor.
I am getting the following error when I try the above:
` /opt/conda/envs/torch_38/lib/python3.8/site-packages/clearml/backend_api/session/client/client.py in new_func(self, *args, **kwargs)
    374     @wrap
    375     def new_func(self, *args, **kwargs):
--> 376         return Response(self.session.send(request_cls(*args, **kwargs)))
    377
    378     new_func.__name__ = new_func.__qualname__ = action
TypeError: __init__() missing 1 required positional argument: 'items' `
ohh actually I think I remember: when you connect a dictionary, the local dtype is used for casting the matching remote key (it's probably more nuanced than that)
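A minimal sketch of what I mean (the parameter names are made up, and this is just my understanding of the casting, not a definitive description of the mechanism):
` from clearml import Task

task = Task.init(project_name="example", task_name="dtype-casting")

# the local dict defines the dtypes; when the task runs remotely, the values
# coming back from the backend/UI are cast to these local types
params = {"lr": 0.001, "epochs": 10, "pattern": "."}
task.connect(params)

# e.g. if "epochs" is edited to "20" in the UI it comes back as int(20);
# by the same logic the string "." should come back as "." unchanged `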
AgitatedDove14 , what I meant by manually filtering: at the moment, to combine the metric values with the HP point, I pull all the parameters and then manually filter on the HP keys (manually = I have to plug them in myself, they are not part of the optimizer object)
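Roughly what I am doing today (a sketch; HP_KEYS is the list I have to maintain by hand, and the parameter/metric names are placeholders):
` # HP_KEYS has to be maintained by hand - this is the "manual" part
HP_KEYS = ["General/lr", "General/batch_size"]

results = []
for t in optimizer.get_top_experiments(10):
    params = t.get_parameters()            # pulls *all* parameters of the experiment
    hp_point = {k: v for k, v in params.items() if k in HP_KEYS}
    metrics = t.get_last_scalar_metrics()  # e.g. metrics["validation"]["loss"]["last"]
    results.append((hp_point, metrics)) `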
it seems to be orders of magnitude faster!
It would be very, very useful for my use case, and I believe it is a relatively common use case in general, for example when using regular expressions in configurations
if I can't "pull", execute, and report tasks from the same persistent python script, it doesn't solve the problem of avoiding rerunning some heavy setup for every lightweight trial
FrothyDog40 , is submitting an issue still relevant?
the solution you suggested works for the single-machine case. The missing part is being able to access and "claim" spawned trials (samples in the HP plane) from multiple machines
it is missing the status that I'm looking for, namely whether a worker is currently running a task or not
I want a manual way to access a global optimizer from multiple machines. It can be an agent, but the critical part is that each machine will be able to pull and report multiple trials without restarting
The difference is that I want a single persistent machine, with a single persistent python script, that can pull, execute, and report multiple tasks
AgitatedDove14 , I see. Someone must have faced the issue of dumping regular expression strings in a tuple before?
I'll have a think and a look too, unfortunately not today
Bonus 2: having a way to push experiments up in the queue
if preferred, I can open a GitHub issue about this
let me try to explain myself again
something like in the example I shared:
` <Machine 1>
# Init Optimizer

<Machine 2>
# heavy one-time Common Initialization
while True:
    # sample Optimizer
    # init task
    # Execute Something
    # report results

<Machine i>
# heavy one-time Common Initialization
while True:
    # sample the **same** Optimizer
    # init task
    # Execute Something
    # report results `
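To be explicit about what each worker machine would run, here is a rough Python sketch of the pattern I am after (sample_point / report_result are hypothetical placeholders for whatever the shared optimizer would expose, not an existing ClearML API):
` from clearml import Task

# heavy one-time common initialization (data loading, model download, etc.)
shared_state = heavy_setup()  # placeholder for the expensive setup

trial_idx = 0
while True:
    # pull the next HP point from the *same* global optimizer (hypothetical call)
    hp_point = optimizer.sample_point()
    if hp_point is None:
        break

    # one lightweight trial, reported as its own task, without restarting the script
    task = Task.init(project_name="hpo-example", task_name=f"trial-{trial_idx}")
    task.connect(hp_point)
    score = run_trial(shared_state, hp_point)  # placeholder for the actual trial
    task.get_logger().report_scalar("objective", "score", value=score, iteration=0)
    optimizer.report_result(hp_point, score)   # hypothetical call
    task.close()
    trial_idx += 1 `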
FrothyDog40 , done 🙂
https://github.com/allegroai/clearml/issues/474
AgitatedDove14 , for creating a dedicated function, I would suggest also including the actual sampled point in the HP space. This would be the most common use case, and essentially the reason for running the HPO: understanding the sensitivity of the metrics with respect to the hyper-parameters
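Something along these lines is what I have in mind (a sketch of the suggested return value, not an existing API; the names are made up):
` # hypothetical helper: top experiments together with the HP point that produced them
def get_top_experiments_with_hp(optimizer, top_k):
    results = []
    for t in optimizer.get_top_experiments(top_k):
        results.append({
            "task_id": t.id,
            "hp_point": t.get_parameters(),          # the sampled point in the HP space
            "metrics": t.get_last_scalar_metrics(),  # the reported objective scalars
        })
    return results `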