I set it to 200000!
But the problem seems to stem from the first plot being the ClearML CPU and GPU monitoring. Were you able to reproduce it? Even when I set the number fairly large, the message appeared as soon as the monitoring plot was reported.
GrievingTurkey78, what timeout did you set? Please note that it's in seconds, so it needs to be a fairly large number.
GrievingTurkey78, let me look into it 🙂
GrievingTurkey78, did you try calling task.set_resource_monitor_iteration_timeout after the task init?
CostlyOstrich36 That seemed to do the job! No message after the first epoch, with the caveat of losing resource monitoring. Any idea what could be causing this? If the resource monitor is the first plot, will the iteration detection fail? Are there any hacks to keep the resource monitoring? Thanks a lot! 🙌
GrievingTurkey78, please try task.init(auto_resource_monitoring=False, ...)
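As a minimal sketch of that call (the project and task names here are placeholders, not from this thread):

from clearml import Task

# Disabling auto_resource_monitoring turns off ClearML's CPU/GPU usage plots,
# so they can no longer be the first thing reported to the task.
task = Task.init(
    project_name="examples",           # placeholder
    task_name="resource monitor off",  # placeholder
    auto_resource_monitoring=False,
)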
Sure! Could you point out how it's done?
GrievingTurkey78, can you try disabling the CPU/GPU monitoring?
I set the number to a crazy value and it fails around the same iteration
Oh, I think I am wrong! Then it must be the ClearML monitoring. Still, it fails way before the timer ends.
GrievingTurkey78, I'm not sure. Let me check.
Do you have CPU/GPU tracking from both PyTorch Lightning AND ClearML reported in your task?
Last question, CostlyOstrich36, sorry to poke you! It seems that even if I set an extremely long time, it will still fail when the first plots are reported. The first plots are generated automatically by PyTorch Lightning and track the CPU and GPU usage. Do you think this could be the cause, or should it also detect the iteration?
GrievingTurkey78, the default is 3 minutes. You can try setting it to a time long enough to make sure it doesn't skip the epoch 🙂
Hey CostlyOstrich36, I am doing a lot of things before the first plot is reported! Is the seconds_from_start parameter unbounded? What should I do if it takes a long time to report the first plot?
I'll give that a try! Thanks CostlyOstrich36
GrievingTurkey78, could it be a heavy calculation that takes time? ClearML has a fallback to time instead of iterations if a certain timeout has passed. You can configure it with task.set_resource_monitor_iteration_timeout(seconds_from_start=<TIME_IN_SECONDS>)
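A short sketch of how that might be wired up right after the task init (the 3600-second value and the names are arbitrary examples, not recommended defaults):

from clearml import Task

task = Task.init(project_name="examples", task_name="timeout sketch")  # placeholder names

# If no iteration has been detected after this many seconds, the resource
# monitor falls back to reporting by wall-clock time instead of iterations.
task.set_resource_monitor_iteration_timeout(seconds_from_start=3600)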
CostlyOstrich36 PyTorch Lightning exposes current_epoch in the trainer, not sure if that is what you mean.
GrievingTurkey78, do you have iterations stated explicitly somewhere in the script?
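For context, one way iterations can be stated explicitly is by reporting scalars with an explicit iteration number through ClearML's logger; the loop, metric names, and values below are illustrative only:

from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="explicit iterations")  # placeholder names

for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    # Passing an explicit iteration number gives ClearML a direct iteration
    # signal instead of relying on automatic detection from framework hooks.
    Logger.current_logger().report_scalar(
        title="loss", series="train", value=loss, iteration=step
    )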