GreasyLeopard35
Moderator
3 Questions, 15 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0

Badges: 1 (15 × Eureka!)
0 Votes · 15 Answers · 509 Views
Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting...
one year ago
0 Votes · 8 Answers · 598 Views
Hi there! I'm getting an error whenever trying to queue experiments using conda package manager that require python>=3.10. Locally (non-queued) the training ...
one year ago
0 Votes · 5 Answers · 541 Views
Not able to resume a hyper-parameter optimization. When I try to resume a stopped or aborted parameter optimization experiment, it will fail with the error --...
one year ago
0 Not able to resume a hyper-parameter optimization.

Hi,
thanks for the prompt reply, AgitatedDove14. Here are some more details:

I am executing locally (i.e. I set args['run_as_service'] = False, as in https://github.com/allegroai/clearml/blob/400c6ec103d9f2193694c54d7491bb1a74bbe8e8/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py#L45). Everything was fine until some network issues occurred and my task was aborted. When I restart it, I see these duplicate configurations in the UI.
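(For context, this is roughly the pattern in that example script; the project/task names below are placeholders, while Task.init(), Task.connect() and Task.execute_remotely() are the standard ClearML calls.)

from clearml import Task

# placeholder names, not the ones from my actual experiment
task = Task.init(project_name='HPO example', task_name='hyper-parameter optimizer')

args = {'run_as_service': False}   # False -> keep the optimization loop in this local process
args = task.connect(args)

if args['run_as_service']:
    # only taken when run_as_service is True: enqueue this task so an agent runs it;
    # with False (my case) execution simply continues locally
    task.execute_remotely(queue_name='services', exit_process=True)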

However, I've just noticed th...

one year ago
0 Not able to resume a hyper-parameter optimization.

It isn't reproducible. I had a stupid typo in my script that parsed the arguments twice. Thanks anyway, you got me on the right track! :)

one year ago
0 Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting negative learning rates...

This code snippet produces numbers in the range from 10 to 1000 instead of [10^-3, 10]. This could be fixed by changing https://github.com/allegroai/clearml/blob/master/clearml/automation/parameters.py#L168:

Now:
values = [v*step_size for v in range(0, int(steps))]
Should be:
values = [self.min_value + v * step_size for v in range(0, int(steps))]

I've tested it locally and it behaves as expected. Also, it would allow for negative values which aren't supported at the moment.
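To make the difference concrete, here is the same arithmetic in plain Python (a standalone sketch, not ClearML code; the values min_value=-3.0, max_value=1.0, step_size=0.5 are just an example):

min_value, max_value, step_size = -3.0, 1.0, 0.5
steps = (max_value - min_value) / step_size          # 8 steps

current = [v * step_size for v in range(0, int(steps))]
# -> [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]   (ignores min_value, never negative)

proposed = [min_value + v * step_size for v in range(0, int(steps))]
# -> [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5]   (starts at min_value, so negative ranges work)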

one year ago
0 Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting negative learning rates...

Moreover, LogUniformParameterRange is not implemented for the hpbandster optimizer: since LogUniformParameterRange inherits from UniformParameterRange, it falls back to the base-class behaviour and yields the raw exponent range [-3, 1] instead of [10^-3, 10]. See https://github.com/allegroai/clearml/blob/master/clearml/automation/hpbandster/bandster.py#L355
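To illustrate the semantics I would expect (just a sketch, not the actual ClearML or hpbandster code):

# a log-uniform range should treat min/max as base-10 exponents and return the
# expanded values, not the raw exponents
def expected_log_uniform_values(min_exp, max_exp, step_size, base=10.0):
    n = int((max_exp - min_exp) / step_size) + 1     # include the upper bound
    return [base ** (min_exp + i * step_size) for i in range(n)]

print(expected_log_uniform_values(-3.0, 1.0, 0.5))
# -> [0.001, ..., 10.0] instead of the raw exponents [-3.0, ..., 1.0]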

one year ago
0 Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting negative learning rates...

from clearml.automation.parameters import LogUniformParameterRange
sampler = LogUniformParameterRange(name='test', min_value=-3.0, max_value=1.0, step_size=0.5)
sampler.to_list()
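# observed: values roughly in the 10 to 1000 range; expected: [10**-3, ..., 10] (see the to_list() fix I posted)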

one year ago
0 Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting negative learning rates...

Hi AgitatedDove14,
The get_value() method works fine. The issue is in to_list(), which calls super().to_list(), which in turn returns a list starting at 0 (and therefore containing only non-negative values). My suggested modification to UniformParameterRange.to_list() would return a list starting at self.min_value (which can be negative) instead.
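A quick way to see the difference between the two methods (same constructor arguments as my snippet; the exact return format of to_list() may differ by version):

from clearml.automation.parameters import UniformParameterRange

p = UniformParameterRange(name='test', min_value=-3.0, max_value=1.0, step_size=0.5)
print(p.get_value())   # random sample, correctly within [-3.0, 1.0]
print(p.to_list())     # enumerated grid; currently starts at 0 instead of min_value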

one year ago
0 Hi everyone! Is anybody using log-scale parameter ranges for hyper-parameter optimization? It seems that there is a bug in the hpbandster module. I'm getting negative learning rates...

Look here, AgitatedDove14:
https://github.com/allegroai/clearml/blob/master/clearml/automation/hpbandster/bandster.py#L356

There is no implementation for LogUniformParameterRange, but since it is an instance of UniformParameterRange (by inheritance), this method returns values in [-3, ..., 1] for my example. It should either raise an exception or return [0.001, ..., 10].
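Roughly what I mean, as a sketch of the dispatch order (not the actual bandster.py code; the attribute names just mirror the constructor arguments used above):

from clearml.automation.parameters import UniformParameterRange, LogUniformParameterRange

def convert_range(param):
    # the subclass check has to come first, otherwise LogUniformParameterRange falls
    # through to the UniformParameterRange branch and the exponents are used as values
    if isinstance(param, LogUniformParameterRange):
        n = int((param.max_value - param.min_value) / param.step_size) + 1
        return [10 ** (param.min_value + i * param.step_size) for i in range(n)]
    if isinstance(param, UniformParameterRange):
        n = int((param.max_value - param.min_value) / param.step_size) + 1
        return [param.min_value + i * param.step_size for i in range(n)]
    raise NotImplementedError('Unsupported parameter type: {}'.format(type(param).__name__))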

one year ago
0 Hi there! I'm getting an error whenever trying to queue experiments using conda package manager that require python>=3.10. Locally (non-queued) the training runs just fine. In the UI I see the following console output:

Python 3.9 runs fine, but there's an issue with the PyTorch dataloaders that seems to be related to that Python version. The ClearML version is 1.6.2 and the agents are on 1.3.0.

one year ago