SourOx12
Moderator
3 Questions, 17 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 1 (17 × Eureka!)
0 Votes · 17 Answers · 1K Views
Hello, I have a problem with task.set_initial_iteration(0) in Google Colab. After continuing the experiment, gaps appear on my graph, but only if I use Colab. I...
2 years ago
0 Votes · 18 Answers · 906 Views
Hi, I have such a problem: after I restore the experiment from the checkpoint, my scalar metrics have gaps due to the fact that my iterations are not zero. I...
3 years ago
0 Votes · 3 Answers · 852 Views
Hello. Please tell me how to make sure that when you start the pipeline, nothing superfluous is installed in the service queue? @PipelineDecorator.pipeline( ...
one year ago
0 Hi, I Have Such A Problem: After I Restore The Experiment From The Checkpoint, My Scalar Metrics Have Gaps Due To The Fact That My Iterations Are Not Zero. Is There A Smart Way To Get Rid Of It?

Sorry for the late answer, AgitatedDove14.
I thought so too and tried this:
```python
!pip install clearml
import clearml

id_last_start = '873add629cf44cd5ab2ef383c94b1c'

clearml.Task.set_credentials(...)
if id_last_start != '':
    task = clearml.Task.get_task(task_id=id_last_start,
                                 project_name='tests',
                                 task_name='patience: 5 factor:0.5')

task = clearml.Task.init(project_name='Exp with ROP',
                         task_name='patience: 2 factor:0.75',
                         co...
```
3 years ago
0 Hi, I Have Such A Problem: After I Restore The Experiment From The Checkpoint, My Scalar Metrics Have Gaps Due To The Fact That My Iterations Are Not Zero. Is There A Smart Way To Get Rid Of It?

AgitatedDove14
Can you please give some code examples of restoring training? I haven't found any. I would be very grateful.

3 years ago
0 Hello, I Have A Problem With Task.Set_Initial_Iteration(0) In Google Colab. After Continuing The Experiment, Gaps Appear On My Graph, But Only If I Use Colab. I Tried It On My Computer And Everything Is Normal There.

AgitatedDove14 Of course, I added it when restoring the experiment. It works correctly when running on my computer, but if I use Colab, for some reason it has no effect.

2 years ago
0 Hello, I Have A Problem With Task.Set_Initial_Iteration(0) In Google Colab. After Continuing The Experiment, Gaps Appear On My Graph, But Only If I Use Colab. I Tried It On My Computer And Everything Is Normal There.

AgitatedDove14
Yes, I have problems with continuing experiments in Colab. I do everything the same as on my computer, but in Colab I get gaps in the charts.

2 years ago
0 Hello, I Have A Problem With Task.Set_Initial_Iteration(0) In Google Colab. After Continuing The Experiment, Gaps Appear On My Graph, But Only If I Use Colab. I Tried It On My Computer And Everything Is Normal There.

When I work through Colab and continue an experiment, I get gaps in the graphs.
For example, the first time I run, I create a task and run a loop:
```python
for i in range(1, 100):
    clearml.Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)
```

Then, on the second run, I continue the task via continue_last_task and reuse_last_task_id, and call task.set_initial_iteration(0). Then I start the loop:
```python
for i in range(100, 200):
    clearml.Logger.current_logger()...
```
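The arithmetic behind the gap can be sketched in plain Python. This is only my assumption about what ClearML does when a task is continued (new reports get shifted by the task's last recorded iteration unless set_initial_iteration(0) clears the offset); the helper below is hypothetical, not ClearML code:

```python
def report_positions(reported_iterations, initial_iteration):
    """X-axis positions the scalars end up at: each reported iteration
    is shifted by the task's initial-iteration offset."""
    return [initial_iteration + i for i in reported_iterations]

# First run: a fresh task has no offset, so iterations 1..99 land at 1..99.
first_run = report_positions(range(1, 100), initial_iteration=0)

# Continued run WITHOUT set_initial_iteration(0): the restored task keeps
# its last iteration (99) as the offset, so absolute iterations 100..199
# land at 199..298 -- producing the gap between 100 and 198 on the chart.
continued_gapped = report_positions(range(100, 200), initial_iteration=99)

# Continued run WITH set_initial_iteration(0): the offset is cleared and
# the second run lines up right after the first, at 100..199.
continued_fixed = report_positions(range(100, 200), initial_iteration=0)

print(first_run[-1], continued_gapped[0], continued_fixed[0])  # 99 199 100
```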

2 years ago
0 Hello, I Have A Problem With Task.Set_Initial_Iteration(0) In Google Colab. After Continuing The Experiment, Gaps Appear On My Graph, But Only If I Use Colab. I Tried It On My Computer And Everything Is Normal There.

AgitatedDove14
I install it in Colab via `pip install clearml`, so it is probably the most up-to-date version there. The version on both my computer and Colab is 1.1.4.

2 years ago
0 Hi, I Have Such A Problem: After I Restore The Experiment From The Checkpoint, My Scalar Metrics Have Gaps Due To The Fact That My Iterations Are Not Zero. Is There A Smart Way To Get Rid Of It?

Hi AgitatedDove14, I finally found a solution to the problem. I should have called task.set_initial_iteration(0) after restoring the task. Thank you for your help.
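For anyone hitting the same thing, a minimal sketch of the call order that works (this assumes a configured ClearML environment with credentials already set; the project and task names are just placeholders):

```python
import clearml

# Restore the previous run instead of starting a fresh task.
task = clearml.Task.init(
    project_name='tests',
    task_name='patience: 5 factor:0.5',
    continue_last_task=True,
)

# The important part: reset the iteration offset AFTER the task is
# restored, so reported iterations keep their absolute values.
task.set_initial_iteration(0)
```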

3 years ago
0 Hi, I Have Such A Problem: After I Restore The Experiment From The Checkpoint, My Scalar Metrics Have Gaps Due To The Fact That My Iterations Are Not Zero. Is There A Smart Way To Get Rid Of It?

AgitatedDove14
Yes, I use continue_last_task with reuse_last_task_id. The iteration number is the actual number of batches that were used, or the epoch at which the training stopped. The iterations are reported sequentially, but for some reason there is a gap in this picture.

3 years ago
0 Hello. Please Tell Me How To Make Sure That When You Start The Pipeline, Nothing Superfluous Is Installed In The Service Queue?

For example, when I start the pipeline, pytorch gets installed in the services queue. But I would like it to be installed only in the queue that the train step will run on.
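If it helps, here is a rough sketch of one way to keep heavy packages out of the services queue. The `execution_queue` and `packages` arguments on `PipelineDecorator.component` are my assumption about the right knobs here, and the queue name `gpu_queue` is a placeholder:

```python
from clearml import PipelineDecorator

# Pin torch to the component that actually needs it, and route that
# component to a worker queue; the pipeline controller running in the
# services queue should then not need to install it.
@PipelineDecorator.component(execution_queue='gpu_queue', packages=['torch'])
def train_step():
    import torch  # imported inside the step, not by the pipeline controller
    ...

@PipelineDecorator.pipeline(name='demo pipeline', project='tests', version='1.0')
def run_pipeline():
    train_step()
```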

one year ago