Can you give a small snippet to play with? Just to understand: when you run on a local machine everything works fine? What do you do with Google Colab?
When I work through Colab and continue an experiment, I get gaps in the graphs.
For example, the first time I run, I create a task and run a loop:
for i in range(1, 100):
    clearml.Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)
Then, on the second run, I continue the task via continue_last_task and reuse_last_task_id, and call task.set_initial_iteration(0). Then I start the loop:
for i in range(100, 200):
    clearml.Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)
And then I get a gap in the graphs.
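To make it easier to reproduce, here is roughly what the two runs look like as full scripts (the project and task names are just placeholders):
from clearml import Task, Logger

# --- First run: create the task and log iterations 1..99 ---
task = Task.init(project_name="demo", task_name="gap-repro")  # placeholder names
for i in range(1, 100):
    Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)
task.close()

# --- Second run (a separate process): continue the same task ---
task = Task.init(
    project_name="demo",
    task_name="gap-repro",
    continue_last_task=True,   # continue the previous task instead of creating a new one
    reuse_last_task_id=True,
)
task.set_initial_iteration(0)  # report absolute iteration numbers, without an offset
for i in range(100, 200):
    Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)
task.close()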
Hi SourOx12
Is this related to this one?
https://github.com/allegroai/clearml/issues/496
But I do not know how it can help me :(
In your code itself, after the Task.init call, add:
task.set_initial_iteration(0)
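i.e. something along these lines (the project/task names here are placeholders):
task = Task.init(project_name="demo", task_name="my-task", continue_last_task=True)
task.set_initial_iteration(0)  # reset the iteration offset before any scalars are reported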
See reply here:
https://github.com/allegroai/clearml/issues/496#issuecomment-980037382
It works correctly when running on my computer, but if I use Colab, for some reason it has no effect.
I think I'm lost on this one. When running in Colab, is this continuing a previous experiment?
Hmm, it seems as if the task.set_initial_iteration(0) is ignored...
What's the clearml version you are using?
Is it the same one you have on the local machine?
I can't think of any actual difference in flow ...
Can you try the following?
task._setup_reporter()
task.set_initial_iteration(0)
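i.e., right after Task.init and before any reporting; note that _setup_reporter() is an internal method, so treat this as a debugging step rather than a stable API:
task = Task.init(project_name="demo", task_name="my-task", continue_last_task=True)  # placeholder names
task._setup_reporter()         # force the reporter to be created in this process
task.set_initial_iteration(0)  # then reset the iteration offset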
Okay, I think I know what's going on (there is a race condition that for some reason behaves differently on Colab).
As a quick hack you can do the following:
Task._report_subprocess_enabled = False
task = Task.init(...)
task.set_initial_iteration(0)
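In a full script the workaround would look roughly like this (placeholder names; _report_subprocess_enabled is an internal flag, so treat it as a temporary hack):
from clearml import Task, Logger

Task._report_subprocess_enabled = False  # disable the background reporting subprocess (internal flag)
task = Task.init(
    project_name="demo",
    task_name="gap-repro",
    continue_last_task=True,
)
task.set_initial_iteration(0)  # takes effect now that reporting runs in-process
for i in range(100, 200):
    Logger.current_logger().report_scalar("test", "loss", iteration=i, value=i)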
Yey! Okay, let me make sure we add this feature to the Task.init arguments so one can control it from code 🙂
I'm so happy to see that this problem has been finally solved!