Answered
Hello, I have a problem with task.set_initial_iteration(0) in Google Colab. After continuing the experiment, gaps appear on my graph, but only if I use Colab. I tried it on my computer and everything is normal there.

Posted 2 years ago

Answers 17


And it works correctly when running on my computer, but if I use Colab, then for some reason it has no effect.

I think I'm lost on this one. When running in Colab, is this continuing a previous experiment?

Posted 2 years ago

AgitatedDove14
Yes, I have problems with continuing experiments in Colab. I do everything the same as on my computer, but in the case of Colab, I have gaps in the charts.

Posted 2 years ago

AgitatedDove14 Hooray, it helped! Thank you very much!!!!

Posted 2 years ago

Yey! Okay, let me make sure we add this feature to the Task.init arguments so one can control it from code 🙂

Posted 2 years ago

When I work in Colab, continuing an experiment gives me gaps in the graphs.
For example, on the first run I create a task and run a loop:

for i in range(1, 100):
    clearml.Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)

Then, on the second run, I continue the task via continue_last_task and reuse_last_task_id and call task.set_initial_iteration(0). Then I start the loop:

for i in range(100, 200):
    clearml.Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)

And then I get a gap in the graphs.
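To make the symptom concrete, here is a toy sketch (my own mock, not ClearML code or internals): a continued task auto-detects an "initial iteration" equal to the last iteration of the previous run and offsets every new point by it, which leaves a gap; resetting the offset to 0 avoids it.

```python
# Toy mock (NOT ClearML code): shows how an auto-detected initial-iteration
# offset on a continued task shifts newly reported points, leaving a gap.

class ToyReporter:
    """Offsets every reported iteration by an 'initial iteration' value.

    When a task is continued, the offset defaults to the last iteration
    of the previous run; set_initial_iteration(0) resets it.
    """

    def __init__(self, last_iteration=0):
        self.initial_iteration = last_iteration
        self.points = []  # (iteration, value) pairs as stored by the backend

    def set_initial_iteration(self, offset):
        self.initial_iteration = offset

    def report_scalar(self, iteration, value):
        self.points.append((self.initial_iteration + iteration, value))

# First run reported iterations 1..99; the continued run reports 100..199.
continued = ToyReporter(last_iteration=99)
for i in range(100, 200):
    continued.report_scalar(i, float(i))
# The first new point lands at 99 + 100 = 199, so the plot jumps from 99
# straight to 199: a gap over iterations 100..198.
assert continued.points[0][0] == 199

fixed = ToyReporter(last_iteration=99)
fixed.set_initial_iteration(0)  # the call discussed in this thread
for i in range(100, 200):
    fixed.report_scalar(i, float(i))
assert fixed.points[0][0] == 100  # continues right after 99: no gap
```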

Posted 2 years ago

AgitatedDove14
I install it in Colab via “pip install clearml”, so it is probably the most up-to-date there. The version on my computer and in Colab is 1.1.4.

Posted 2 years ago

Hi, AgitatedDove14
Yes, that sounds like my problem, but I do not know how it can help me :(

Posted 2 years ago

I can't think of any actual difference in flow...
Can you try the following?

task._setup_reporter()
task.set_initial_iteration(0)

Posted 2 years ago

But I do not know how it can help me :(

In your code, right after the Task.init call, add:

task.set_initial_iteration(0)

See the reply here:
https://github.com/allegroai/clearml/issues/496#issuecomment-980037382

Posted 2 years ago

Okay, I think I know what's going on (there is a race that for some reason behaves differently on Colab).
As a quick hack you can do the following:

Task._report_subprocess_enabled = False
task = Task.init(...)
task.set_initial_iteration(0)
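For intuition on why that flag can help, here is a toy sketch (my own mock of the race, not ClearML internals; the real mechanism may differ): a reporter that runs out-of-process effectively works from a snapshot of the task state taken at init time, so a set_initial_iteration(0) issued afterwards never reaches it, while an in-process reporter sees the change.

```python
# Toy sketch (assumption, NOT ClearML internals): a subprocess reporter
# snapshots the task state at init time, so later offset changes in the
# parent are lost; an in-process reporter reads the live state instead.
import copy

class ToyTask:
    def __init__(self, subprocess_enabled, last_iteration):
        self.initial_iteration = last_iteration
        if subprocess_enabled:
            # Subprocess reporter: works from a copy of the state at init.
            self.reporter_state = copy.copy(self.__dict__)
        else:
            # In-process reporter: reads the live state.
            self.reporter_state = self.__dict__

    def set_initial_iteration(self, offset):
        self.initial_iteration = offset  # updates the live state only

    def report_iteration(self, iteration):
        return self.reporter_state["initial_iteration"] + iteration

# With the subprocess reporter (the Colab behaviour in this thread),
# the reset is lost and the first continued point is still shifted:
racy = ToyTask(subprocess_enabled=True, last_iteration=99)
racy.set_initial_iteration(0)
assert racy.report_iteration(100) == 199

# Disabling the subprocess reporter makes the reset visible:
inproc = ToyTask(subprocess_enabled=False, last_iteration=99)
inproc.set_initial_iteration(0)
assert inproc.report_iteration(100) == 100
```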

Posted 2 years ago

I get gaps in the graphs.
For example, the first time I run, I create a task and run a loop:

Hi SourOx12
Is this related to this one?
https://github.com/allegroai/clearml/issues/496

Posted 2 years ago

AgitatedDove14
If I use this method, then new scalar values stop being added to the graph.

Posted 2 years ago

AgitatedDove14 Of course, I added it when restoring the experiment. And it works correctly when running on my computer, but if I use Colab, then for some reason it has no effect.

Posted 2 years ago

Can you give a small snippet to play with? Just to understand: when you run on your local machine, everything works fine? And what do you do in Google Colab?

Posted 2 years ago

Hmm, it seems as if the task.set_initial_iteration(0) is ignored...
What's the clearml version you are using?
Is it the same one you have on the local machine?

Posted 2 years ago

I'm so happy to see that this problem has been finally solved!

Posted 2 years ago
606 Views
17 Answers
2 years ago