CurvedHedgehog15
Moderator
8 Questions, 28 Answers · Reputation: 0 · Badges: 1 · 26 × Eureka!
Active since 10 January 2023 · Last activity one year ago
0 Votes · 2 Answers · 533 Views
Hello, I would like to ensure, the comparison of hyperparameters is supposed to work as it is depicted in the screenshot? Following the example, I would supp...
one year ago
0 Votes · 5 Answers · 685 Views
Hello, Can I get somehow JSON files of plots for the given Task? I know there is the "Download JSON" button near the plots in your web UI, but I need do it p...
2 years ago
0 Votes · 3 Answers · 586 Views
one year ago
0 Votes · 7 Answers · 644 Views
Hello again, I would like to ask you if something like this is possible in ClearML (see screenshot)? For each experiment ( t01 , t02 , etc.) I am able to rep...
one year ago
0 Votes · 24 Answers · 519 Views
Hello, I would like to optimize hparams saved in Configuration objects. I used Hydra and OmegaConf for hparams definition (see img). How should I define the ...
2 years ago
0 Votes · 5 Answers · 577 Views
2 years ago
0 Votes · 2 Answers · 556 Views
one year ago
0 Votes · 7 Answers · 550 Views
Hi all, any idea why spawned trainings during optimization can end with the following message User aborted: stopping task (3) ? I (user) do not stop them int...
2 years ago
0 Hello, I Would Like To Ensure, The Comparison Of Hyperparameters Is Supposed To Work As It Is Depicted In The Screenshot? Following The Example, I Would Suppose To Mark In Red The Change In

Hi, CostlyOstrich36 , thank you for your response.
I realised my issue happens when I compare hyperparameters connected via task.connect_configuration ; I compared them in the DETAILS section (see screenshot).
When I connect them using task.connect instead, I can compare them in the HYPER PARAMETERS section, which works as I expected.
So, issue solved 🙂
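The difference can be illustrated without ClearML at all (this is an illustration of the comparison behaviour, not of ClearML internals): task.connect registers individual key/value pairs, which makes a field-by-field diff possible, while task.connect_configuration stores one opaque configuration blob that can only be compared as a whole.

```python
# Illustration only (not ClearML internals): connect()-style flat key/value
# hyperparameters can be diffed field by field, which is what the
# HYPER PARAMETERS comparison view shows; a connect_configuration()-style
# opaque text blob offers no per-field structure to diff.

def diff_hyperparams(run_a: dict, run_b: dict) -> dict:
    """Return {key: (value_a, value_b)} for every key whose values differ."""
    keys = set(run_a) | set(run_b)
    return {k: (run_a.get(k), run_b.get(k)) for k in sorted(keys)
            if run_a.get(k) != run_b.get(k)}

run_a = {"lr": 0.01, "batch_size": 32, "optimizer": "adam"}
run_b = {"lr": 0.001, "batch_size": 32, "optimizer": "adam"}
print(diff_hyperparams(run_a, run_b))  # {'lr': (0.01, 0.001)}
```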

one year ago
0 Hi All, Any Idea Why Spawned Trainings During Optimization Can End With The Following Message

Oh, damn, you're right CostlyOstrich36 , this makes sense. And indeed, AgitatedDove14 , if I look at the objective, it seems that tasks whose objective is far from the base task are aborted. Thank you very much, guys.

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

CostlyOstrich36 It is a dict containing the OmegaConf dump: {'OmegaConf': "task_name: age\njira_task: IMAGE-2536\n ...} . But I think the better option is get_configuration_object_as_dict("Hyperparameters") .

2 years ago
0 Hi, I Am Using Pytorch Lightning In Combination With Clearml. I Would Like To Report Figure In Clearml Debug Samples. I Use `Self.Logger.Experiment.Add_Figure("Test/Name", Fig, Global_Step=Self.Global_Step)`, Where Self Is Module. Indeed, The Figure Ends

CostlyOstrich36 our code structure is quite complex, so I can't just copy and paste; I would have to make a dummy code snippet. If you already have one, please send it and I can fill in the logging function.

one year ago
0 Hello Again, I Would Like To Ask You If Something Like This Is Possible In Clearml (See Screenshot)? For Each Experiment (

Hi ExasperatedCrab78 , thank you for your response! I am not sure if I understand you correctly; can you provide a dummy example, please?
What we already tried is reporting a scalar for individual FAR levels, i.e. 0.001, 0.002, 0.01, etc. But this is not really good for us, as we lose the overall view of performance when comparing multiple scalars in separate places. 😞
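One way to keep the overall view in a single plot (a hedged sketch; reporting the resulting series could then be done with e.g. ClearML's Logger.report_scatter2d) is to assemble the per-FAR scalars into one (FAR, TPR) series first:

```python
# Sketch: gather per-FAR-level scalars into a single (far, tpr) series that
# can be reported as one 2D plot instead of many separate scalars.
# The measurement values below are made up for illustration.

def far_series(tpr_at_far: dict) -> list:
    """Sort {far_level: tpr} measurements into one plottable series."""
    return sorted(tpr_at_far.items())

measurements = {0.01: 0.93, 0.001: 0.85, 0.002: 0.88}
print(far_series(measurements))  # [(0.001, 0.85), (0.002, 0.88), (0.01, 0.93)]
```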

one year ago
0 Hello Again, I Would Like To Ask You If Something Like This Is Possible In Clearml (See Screenshot)? For Each Experiment (

Thank you ExasperatedCrab78 for your reply. As you said, I still miss the overview I am looking for, so I opened an issue as you suggested 🙂
https://github.com/allegroai/clearml/issues/760#issue-1355778280

one year ago
0 Hi, I Have Some Questions About Hyperparameter Optimization. We Have A Setup Where We Use Pytorchlightning Cli With Clearml For Experiment Tracking And Hyperparameter Optimization. Now, All Our Configurations Are Config-File Based. Sometime We Have Linke

Oh, sorry, I misunderstood your issue. 😞
But it is an interesting one!
What comes to my mind is that https://clear.ml/docs/latest/docs/references/sdk/hpo_parameters_parameterset#class-automationparameterset may be what is required when there is a link between two variables. But I have never tested it and I am not a ClearML developer, so do not take this advice too seriously. 🙂 Hopefully, someone more experienced with ClearML will respond to you.
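For intuition, the idea behind a parameter set is to enumerate only the valid joint combinations of linked variables instead of sampling them independently. A minimal sketch in plain Python (the parameter names, values, and divisibility constraint are made up for illustration):

```python
# Hypothetical illustration of linked hyperparameters: instead of letting the
# optimizer sample hidden_dim and num_heads independently, list only the
# joint combinations that satisfy the constraint between them.

linked_combinations = [
    {"General/hidden_dim": 128, "General/num_heads": 4},
    {"General/hidden_dim": 256, "General/num_heads": 8},
]

def satisfies_link(combo: dict) -> bool:
    # example constraint: hidden_dim must be divisible by num_heads
    return combo["General/hidden_dim"] % combo["General/num_heads"] == 0

print(all(satisfies_link(c) for c in linked_combinations))  # True
```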

one year ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

Regarding get_configuration_objects() , I realized I queried the config of the optimization task, not of the base task, sorry. But the earlier question about setting the hparam name is still interesting!

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 I ran a dummy training with task.connect to have the hparams in the Hyperparameters section, then ran the HyperParameterOptimizer, and indeed "hpo_params/hparam" is updated; however, the training fails ( solver is a child folder with various configs in my repo):

```
File "/root/.clearml/venvs-builds/3.6/code/train.py", line 17, in <module>
    from solver.config.utils import (
ModuleNotFoundError: No module named 'solver'
```

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 Do hparams saved in the Hyperparameters section take precedence over hparams saved in configuration objects?
Regarding the callback, I am not really sure how exactly it is meant. I followed the implementation of HyperParameterOptimizer , but I have no idea where I could place such a thing. Can you provide some further explanation, please? Sorry, I am a beginner.

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 Yes, allowing users to modify the configuration_object would be great 🙂 ... Well, I will at least try copying the OmegaConf object to the hyperparameters section, and we will see shortly whether the "quickest way" works as a workaround.

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 I figured out the problem. I ran the dummy training (Task.init, training script, logging to ClearML) just "locally", so the optimization task did not know the desired environment (Git repo, Docker, etc.). I had to submit the task using clearml-task , and then the optimization tasks no longer failed.

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 Unfortunately, the hyperparameters in the configuration object seem to take precedence over the hyperparameters in the Hyperparameters section, at least in my case. I will probably try to get rid of the OmegaConf configuration, copy it to the Hyperparameters section, and we will see.

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

I actually think I do this. The purpose of normalize_and_flat_config is just to take an hparams DictConfig with a possibly nested structure and flatten it into a dict with direct key-value pairs. For instance,
{ 'model' : { 'class' : Resnet, 'input_size' : [112, 112, 3] } } is simplified to
{ 'model.class' : Resnet, 'model.input_size' : [112, 112, 3] } , so in my case normalize_and_flat_config(hparams) is actually your overrides . And as you suggested, I tried to remove Task.connect_c...
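For reference, a minimal sketch of what a flattening helper like normalize_and_flat_config might look like (this function is my own reconstruction of the behaviour described above, not the actual code from the thread):

```python
# Sketch: flatten a nested config mapping into dot-separated keys,
# e.g. {"model": {"class": ...}} -> {"model.class": ...}.

def flatten(cfg: dict, prefix: str = "") -> dict:
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # recurse into nested sections, extending the dotted prefix
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

nested = {"model": {"class": "Resnet", "input_size": [112, 112, 3]}}
print(flatten(nested))
# {'model.class': 'Resnet', 'model.input_size': [112, 112, 3]}
```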

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

I think updating the hparams back could be the solution for me. Just to be sure we mean the same:

```python
@hydra.main(config_path="solver/config", config_name="config")
def train(hparams: DictConfig):
    task = Task.init(hparams.task_name, hparams.tag)
    overrides = {'my_param': hparams.value}  # dict
    task.connect(overrides, name='overrides')
    # <update hparams according to overrides and use them further in the code>
```

The overrides are changed by the optimization, thus hparams will also be changed.
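The "update hparams according to overrides" step could be sketched like this (plain dicts for illustration; with Hydra the config would be an OmegaConf DictConfig, and the dotted-key format is an assumption):

```python
# Sketch: write flat, dot-separated override keys (as mutated by the
# optimizer through task.connect) back into a nested config dict.

def apply_overrides(cfg: dict, overrides: dict) -> dict:
    for dotted_key, value in overrides.items():
        node = cfg
        *path, leaf = dotted_key.split(".")
        for part in path:
            # descend, creating intermediate sections if missing
            node = node.setdefault(part, {})
        node[leaf] = value
    return cfg

cfg = {"optimizer": {"lr": 0.01}, "epochs": 10}
print(apply_overrides(cfg, {"optimizer.lr": 0.001}))
# {'optimizer': {'lr': 0.001}, 'epochs': 10}
```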

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

Yes, it is logged under the default name "OmegaConf". But the hparams are still taken from there, not from the Hyperparameters section.

2 years ago
0 Hello, I Have Two Experiments Having The Same Plot With The Same X Values. When I Compare These Two Experiments, The Plots Are Drawn Next To Each Other (See Figure), But I Would Appreciate To See The Y-Values Of The Experiments Just In One Plot. The Plot

Hi AgitatedDove14 , thank you for your response.
Yes, at first I was thinking about option 2, but then I saw one case in our experiments where the UI merges the plots just as we want, and I was wondering if there is some simple way to do this for all plots. In my opinion, option 1 is also fine for our use case - how can I combine two plots in the UI as you mentioned?

2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

AgitatedDove14 Hmm, every training is run by a bash script calling train.py , which looks something like this:

```python
@hydra.main(config_path="solver/config", config_name="config")
def train(hparams: DictConfig):
    """
    Run training of a pytorch-lightning model
    """
    # Set process title
    setproctitle.setproctitle(f"{hparams.tag}-{get_user_name()}")

    try:
        # Init ClearML Task and connect configuration
        task = Task.init(hparams.task_name, hparams.tag)
        task.conne...
```
2 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

Hello again, AgitatedDove14 and others. I am writing to let you know what works for us for optimizing hparams stored in a DictConfig:

```python
hparams_dict = OmegaConf.to_object(hparams)

# update hparams_dict using the new hyperparameters set by the optimizer
hparams_dict = task.connect(hparams_dict, name="HPO")

# ProxyDictPostWrite to dict
hparams_dict = hparams_dict._to_dict()

# update the hparams DictConfig, which is used later in the training
hparams = OmegaConf.create(hparams_dict)

train(hparams...
```

2 years ago
0 Hi All, Any Idea Why Spawned Trainings During Optimization Can End With The Following Message

AgitatedDove14 so if I understand it correctly, parameters such as time_limit_per_job , max_iteration_per_job , etc. can be overridden by internal processes in Optuna and so on, right? I observe this behaviour also with RandomSearch - does it stop the experiments too? And as I wrote, the first two spawned tasks were aborted with this message, which is weird, isn't it? I mean that HPO early-stops the tasks even though no previous tasks (benchmarks) are known.

2 years ago