Hi, I came across some inconsistency in the iteration reporting in ClearML with PyTorch Lightning when calling trainer.fit multiple times. Before I dive in, I wondered if there is a known issue related to this?
Hi AgitatedDove14, so it looks something like this:
```python
Task.init()

trainer.fit(model)  # ClearML logging starts from 0 and logs all summaries
                    # correctly according to the real iteration count
# fit is stopped at epoch = n

# ... something happens here ...

trainer.fit(model)  # ClearML logging starts from n + n (that's how it seems)
                    # for the non-explicit scalar summaries (debug samples,
                    # resource-monitoring scalars, and the global iteration count)
# fit is stopped again
...
```
I am at the moment moving away from this implementation to something else, so personally it isn't an issue for me; I'm reporting it because it might be useful for someone in the future.
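For anyone who runs into this later, here is a minimal sketch of a possible workaround, assuming the offset comes from ClearML's internal iteration counter rather than from Lightning itself. It relies on `Task.set_initial_iteration()` and `Logger.report_scalar()` from the ClearML SDK; `BoringModel` is just a hypothetical placeholder for whatever LightningModule you already have.

```python
# Hypothetical setup: BoringModel stands in for your own LightningModule;
# the ClearML calls are the relevant part of the sketch.
from clearml import Task
import pytorch_lightning as pl

task = Task.init()

model = BoringModel()
trainer = pl.Trainer(max_epochs=1)

trainer.fit(model)  # first run: iterations are reported from 0 as expected

# Assumption: resetting ClearML's iteration offset before the next fit keeps
# the auto-logged summaries (debug samples, resource-monitoring scalars,
# global iteration count) from starting at a stale/doubled count.
task.set_initial_iteration(0)

trainer.fit(model)  # second run

# Alternative: report scalars explicitly with an iteration you control,
# bypassing the automatic iteration inference entirely.
task.get_logger().report_scalar(
    title="loss", series="train", value=0.123, iteration=trainer.global_step
)
```

This is only a sketch of what I'd try, not a confirmed fix for the counter behaviour described above.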