Hi, I have some questions about hyperparameter optimization. We have a setup where we use PyTorch Lightning CLI with ClearML for experiment tracking and hyperparameter optimization. Now, all our configurations are config-file based. Sometimes we have linke…


Hi CostlyOstrich36
What I'm seeing is expected behavior:

In my toy example, I have a VAE that is defined by a YAML config file and parsed with PyTorch Lightning CLI. Part of the config defines the latent dimension (n_latents) and the number of input channels of the decoder (in_channels). These two values need to be the same. When I just use the Lightning CLI, I can use variable interpolation with OmegaConf like this:
```yaml
class_path: mymodel.VAE
init_args:
  {...}
  bottleneck:
    class_path: mymodel.Bottleneck
    init_args:
      in_channels: ${init_args.encoder.init_args.out_channels}
      n_latents: 256
  decoder:
    class_path: mymodel.Decoder
    init_args:
      in_channels: ${init_args.bottleneck.init_args.n_latents}
      {...}
```
The trouble is that the variables are already inserted when ClearML updates the associated Task for training the VAE.

In the base task for my optimization I then see this in the UI (Configuration / Hyper Parameters):
Args/fit.model.init_args.bottleneck.init_args.n_latents: 256
Args/fit.model.init_args.decoder.init_args.in_channels: 256
which is as expected.

When I then set up a hyperparameter optimization job and want to modify n_latents of my bottleneck, the number of input channels of the decoder has to be changed to the same value that was sampled for n_latents, and that's my issue 🙂
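To make the desired behavior concrete, here is a minimal sketch of the manual workaround (the parameter names are the ones from the UI listing above; Task.get_parameter / Task.set_parameter are stock ClearML calls, everything else is assumed): the optimizer samples only n_latents, and the training entry point copies the sampled value onto the decoder argument before the model is instantiated.

```python
from clearml import Task

# Running inside an HPO trial: the optimizer has already cloned the base
# task and overwritten the sampled hyperparameter values.
# (project_name / task_name below are placeholders.)
task = Task.current_task() or Task.init(project_name="VAE", task_name="train")

# Read the value the optimizer sampled for the bottleneck...
n_latents = task.get_parameter(
    "Args/fit.model.init_args.bottleneck.init_args.n_latents"
)
if n_latents is not None:
    # ...and mirror it onto the decoder argument that OmegaConf had already
    # resolved to the old value when the base task was created.
    task.set_parameter(
        "Args/fit.model.init_args.decoder.init_args.in_channels", n_latents
    )
```

That keeps the two values consistent per trial, but it has to live in every training script; what I'm really after is a way to express the link on the optimizer side.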

Was that clearer (albeit longer)?

Edit: I have played around with a LinkedParameter, which held both a main name and a linked_arg and was subclassed from clearml.automation.parameters.Parameter, but the parameters seem to be simple placeholders for the optimizer classes (e.g. in _convert_hyper_parameters_to_optuna in clearml.automation.optuna); a rough sketch of the idea is below.
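For reference, roughly the shape of that attempt (hypothetical class; it would only have a chance with strategies that consume Parameter.to_list(), e.g. grid/random search, and as described the Optuna backend just treats parameters as placeholders):

```python
from clearml.automation.parameters import DiscreteParameterRange

class LinkedParameter(DiscreteParameterRange):
    """Discrete range that mirrors each sampled value onto a second,
    linked argument (e.g. decoder in_channels <- bottleneck n_latents)."""

    def __init__(self, name, values, linked_arg):
        super().__init__(name, values=values)
        self.linked_arg = linked_arg

    def to_list(self):
        # The base class yields one {name: value} dict per candidate value;
        # duplicate that value under the linked argument's name as well.
        return [
            dict(combo, **{self.linked_arg: combo[self.name]})
            for combo in super().to_list()
        ]
```

Usage would look like LinkedParameter("Args/fit.model.init_args.bottleneck.init_args.n_latents", values=[64, 128, 256], linked_arg="Args/fit.model.init_args.decoder.init_args.in_channels"), but as far as I can tell _convert_hyper_parameters_to_optuna only maps the built-in parameter types, so the linked value never reaches the Optuna trials.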

  
  