Hi CostlyOstrich36
What I'm seeing is expected behavior:
In my toy example, I have a VAE that is defined by a YAML config file and parsed with the PyTorch Lightning CLI. Part of the config defines the latent dimension (n_latents) and the number of input channels of the decoder (in_channels). These two values need to be the same. When I just use the Lightning CLI, I can use variable interpolation with OmegaConf like this:

```yaml
class_path: mymodel.VAE
init_args:
  {...}
  bottleneck:
    class_path: mymodel.Bottleneck
    init_args:
      in_channels: ${init_args.encoder.init_args.out_channels}
      n_latents: 256
  decoder:
    class_path: mymodel.Decoder
    init_args:
      in_channels: ${init_args.bottleneck.init_args.n_latents}
  {...}
```
The trouble is that the interpolated variables have already been resolved to their values by the time ClearML updates the associated Task for training the VAE.
In the base task for my optimization I then see this in the UI (Configuration/Hyper parameters):

Args/fit.model.init_args.bottleneck.init_args.n_latents: 256
Args/fit.model.init_args.decoder.init_args.in_channels: 256

which is as expected.
When I then set up a hyperparameter optimization job and want to modify n_latents of my bottleneck, the number of input channels of the decoder has to be changed to the same value that was sampled for n_latents, and that's my issue 🙂 (roughly sketched below).
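For context, my optimizer setup is roughly the following (the base task ID, metric names and value ranges are just placeholders, not my real config); the two ranges are sampled independently, so nothing keeps them in sync:

```python
from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id="<base task id>",
    hyper_parameters=[
        # the parameter I actually want to optimize
        DiscreteParameterRange(
            "Args/fit.model.init_args.bottleneck.init_args.n_latents",
            values=[64, 128, 256],
        ),
        # this one would have to follow whatever value is sampled above,
        # but each range is drawn on its own
        DiscreteParameterRange(
            "Args/fit.model.init_args.decoder.init_args.in_channels",
            values=[64, 128, 256],
        ),
    ],
    objective_metric_title="val",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
)
```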
Was that more clear (albeit longer)?
Edit: I have played around with a LinkedParameter that held both a main name and a linked_arg and was subclassed from clearml.automation.parameters.Parameter, but the parameters seem to be simple placeholders for the optimizer classes (e.g. in _convert_hyper_parameters_to_optuna in clearml.automation.optuna).
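In spirit, the attempt was something like this (simplified from memory and shown on top of DiscreteParameterRange for brevity; linked_arg is my own addition, not a ClearML feature):

```python
from clearml.automation.parameters import DiscreteParameterRange

class LinkedParameter(DiscreteParameterRange):
    """A discrete range that should also write the sampled value into a
    second (linked) argument, e.g. the decoder's in_channels."""

    def __init__(self, name, values, linked_arg=None):
        super().__init__(name, values=values)
        self.linked_arg = linked_arg

    def get_value(self):
        # return the sampled value under both the main and the linked name
        sampled = super().get_value()  # {name: value}
        if self.linked_arg:
            sampled[self.linked_arg] = sampled[self.name]
        return sampled
```

But since the Optuna backend builds its search space directly from the range definitions rather than calling get_value, the linked_arg never takes effect.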