Not sure why that is related to saving images
When the script is hung at the end, the experiment says failed in ClearML
What happens if you're running the reporting example from the ClearML github repository?
Thanks @<1719524641879363584:profile|ThankfulClams64>, having code that can reproduce it is exactly what we need.
One thing I might have missed, and it is very important: what is your tensorboard package version?
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
Hi @<1719524641879363584:profile|ThankfulClams64>, the logging is done by a separate process, so I'm pretty sure it's not terminating all of a sudden. Did you manage to get a full log of such an experiment to share?
I'll update my clearml version. Unfortunately I do not have a small code snippet and it is not always repeatable. Is there some additional logging that can be turned on?
Thank you @<1719524641879363584:profile|ThankfulClams64> for opening the GitHub issue, hopefully we will be able to reproduce it and fix it quickly
Hi @<1719524641879363584:profile|ThankfulClams64> , does the experiment itself show on the ClearML UI?
That makes sense... If you set auto_connect_streams to False, this means that auto reporting will be disabled, as per the documentation. If you set it to True, then logging should resume.
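For reference, a minimal sketch of that Task.init call (project and task names here are placeholders, not from your code), assuming you only want to drop console capture:

from clearml import Task

# Disabling auto_connect_streams stops stdout/stderr/logging capture, so the
# console section in the UI stays empty; TensorBoard scalars are reported
# through a separate mechanism.
task = Task.init(
    project_name="my_project",   # placeholder
    task_name="my_experiment",   # placeholder
    auto_connect_streams=False,
)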
Hi @<1719524641879363584:profile|ThankfulClams64> ! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage you are using tf.Summary, which no longer exists since tensorflow 2.2.3, which I believe is too old to work with tensorboard==2.16.2.
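For reference, a rough sketch of the tf.summary replacement in TF 2.x (the log dir and image values are placeholders, not from your code):

import numpy as np
import tensorflow as tf

# tf.summary.image writes through a file writer that TensorBoard (and the
# ClearML TensorBoard bindings) can pick up, instead of the removed
# tf.Summary protobufs.
writer = tf.summary.create_file_writer("./tb_logs")  # placeholder log dir
image = np.random.rand(1, 64, 64, 3).astype("float32")  # one fake RGB image
with writer.as_default():
    tf.summary.image("debug/sample", image, step=0)
writer.flush()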
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your program reported (see the sketch below).
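Something along these lines (a sketch only; the names and the iteration number are placeholders):

from clearml import Task

# Telling ClearML where the previous run stopped keeps resumed scalars from
# being reported again starting at iteration 0.
task = Task.init(project_name="my_project", task_name="resumed_run")  # placeholders
last_iteration = 1355  # placeholder: last iteration the previous run reported
task.set_initial_iteration(last_iteration)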
Okay I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls. So there isn't really anything in the ClearML task page console section
Not sure if this is helpful, but this is what I get when I ctrl-c out of the hung script
^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 317, in _handle_program_exit
self.wait_for_events()
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 337, in wait_for_events
return report_service.wait_for_events(timeout=timeout)
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 129, in wait_for_events
if self._empty_state_event.wait(timeout=1.0):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/utilities/process/mp.py", line 445, in wait
return self._event.wait(timeout=timeout)
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 349, in wait
self._cond.wait(timeout)
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 261, in wait
return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt:
Yea I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't record to ClearML
Any chance you have some uncommitted code changes such that, when they are not included, this works fine?
Is there some way to kill all connections of a machine to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted
I'm not sure if it still reports logs, but it will continue running on the machine
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
STATUS MESSAGE: N/A
STATUS REASON: Signal None
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
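For anyone else hitting this, this is where I understand that setting lives in clearml.conf (assuming the default config layout):

sdk {
    development {
        # skip computing/uploading the uncommitted git diff with each task
        store_uncommitted_code_diff: false
    }
}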
I will try with clearml==1.16.3rc2 and see if it still has the issue
The console logging still works. Aborting the task showed up in the log but did not work, and the process continued until I killed it.
It seems similar to this: None. Is it possible that saving too many model weights causes the metric logging thread to die?
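One way to test that theory would be to narrow the automatic framework bindings; a sketch only, assuming PyTorch checkpoints are what is being saved (names are placeholders):

from clearml import Task

# Disable automatic PyTorch model/checkpoint logging but keep TensorBoard
# capture, to check whether heavy model saving is what starves metric reporting.
task = Task.init(
    project_name="my_project",      # placeholder
    task_name="checkpoint_debug",   # placeholder
    auto_connect_frameworks={"pytorch": False, "tensorboard": True},
)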
task.connect(model_config)
task.connect(DataAugConfig)
If these are separate dictionaries, you should probably use two sections:
task.connect(model_config, name="model config")
task.connect(DataAugConfig, name="data aug")
It is still getting stuck.
I notice that one of the scalars that gets logged early is logging the epoch while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
Wait, so you are seeing some scalars?
while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
what are you seeing in your TB?
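For comparison, a minimal sketch of reporting a scalar with an explicit iteration through the ClearML Logger, so an epoch-based series doesn't get mixed with per-batch iteration counters (values are placeholders):

from clearml import Logger

logger = Logger.current_logger()
epoch, val_loss = 26, 0.42  # placeholders
# Passing the epoch as the iteration keeps this series on an epoch axis.
logger.report_scalar(title="val", series="loss", value=val_loss, iteration=epoch)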
Hi @<1719524641879363584:profile|ThankfulClams64>
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML.
I use ClearML with pytorch 1.7.1, pytorch-lightning 1.2.2 and Tensorboard auto
All ClearML has the latest stable updates. (clearml 1.7.4, clearml-agent 1.7.2)
Is this still happening with the latest clearml (clearml==1.16.3rc2)?
What is the TB version?
I remember a fix regarding lightning support
Also, just making sure, are you using the default lightning TB logger?
How are you initializing the Task.init (i.e. could you copy the code here?)
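For reference, a minimal sketch of such a setup with the default lightning TensorBoard logger (model/datamodule omitted; names are placeholders):

from clearml import Task
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Task.init is called once before training; the ClearML TensorBoard bindings
# then pick up whatever the lightning logger writes.
task = Task.init(project_name="my_project", task_name="lightning_run")  # placeholders
tb_logger = TensorBoardLogger(save_dir="lightning_logs")
trainer = Trainer(logger=tb_logger, max_epochs=1)
# trainer.fit(model, datamodule=dm)  # omitted in this sketch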
I am still having this issue. An update is that the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it says it did, but the experiment remains running on the computer.
Just to make sure, did the logging to the clearml server work previously and stopped working at some point?
Is this just the console output while training?
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run