I'm not sure how to even troubleshoot this.
It seems similar to this: None. Is it possible that saving too many model weights causes the metric logging thread to die?
My bad, if you set auto_connect_streams to false, you basically disable the console logging... Please see the documentation:
auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run
That makes sense... If you set auto_connect_streams to False, this means that auto reporting will be disabled, as per the documentation. If you set it to True, then logging should resume.
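For reference, something like this is what controls it (a sketch; the project/task names and the mapping values here are just illustrative):
from clearml import Task

# auto_connect_streams accepts a bool or a per-stream mapping
task = Task.init(
    project_name='examples',
    task_name='console logging test',
    auto_connect_streams=True,  # or e.g. {'stdout': True, 'stderr': True, 'logging': False}
)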
Is this just the console output while training?
So I was able to reproduce the same behavior on a machine running this example: None
by adding the following callback:
# imports added for completeness (assuming tf.keras's TensorBoard callback);
# note this uses TF1-era APIs (tf.Summary, self.writer), which no longer exist in TF2
import io

import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras.callbacks import TensorBoard


class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        # convert a single-channel 28x28 array into an RGB PNG summary image
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')
        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)
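For context, this is roughly how the callback gets wired into model.fit (a sketch with placeholder data and model, not the actual example from the link):
import numpy as np
from tensorflow.keras import layers, models

# placeholder MNIST-shaped data, only to show where the callback plugs in
x = np.random.rand(64, 28, 28).astype('float32')
y = np.random.randint(0, 10, size=(64,))

model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.fit(x, y,
          validation_data=(x, y),
          epochs=2,
          callbacks=[TensorBoardImage(log_dir='./logs')])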
So it seems like there is some bug in how ClearML is logging TensorBoard images that causes everything to fail.
The same training works sometimes, but I'm not sure how to troubleshoot it when it stops logging the metrics.
Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the console section of the ClearML task page.
We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4
Not sure if this is helpful, but this is what I get when I Ctrl-C out of the hung script:
^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 317, in _handle_program_exit
self.wait_for_events()
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 337, in wait_for_events
return report_service.wait_for_events(timeout=timeout)
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 129, in wait_for_events
if self._empty_state_event.wait(timeout=1.0):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/utilities/process/mp.py", line 445, in wait
return self._event.wait(timeout=timeout)
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 349, in wait
self._cond.wait(timeout)
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 261, in wait
return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt:
Running clearml_example.py in None reproduces the issue
If you remove any reference of ClearML from the code on that machine, does it still hang?
Hi ThankfulClams64! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage callback you are using tf.Summary, which no longer exists since tensorflow 2.2.3, a version which I believe is too old to work with tensorboard==2.16.2.
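(For reference, the TF2-era way to write an image summary looks roughly like this; just to illustrate the API change, not code from this thread:)
import numpy as np
import tensorflow as tf

# TF2 image summary: tf.summary.image expects a [batch, height, width, channels] tensor
writer = tf.summary.create_file_writer('./logs')
img = np.random.rand(1, 28, 28, 1).astype('float32')  # placeholder image
with writer.as_default():
    tf.summary.image('image', img, step=0)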
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your program reported
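Something like this (a sketch; the actual offset would be whatever your last run reported):
from clearml import Task

task = Task.init(project_name='my project', task_name='my task',
                 continue_last_task=True)
# offset reported iterations so the resumed run continues where the previous one stopped
task.set_initial_iteration(1000)  # illustrative value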
Yes, it is logging to the console. When it is having the issue, the script does hang after it completes all the epochs.
Yes, tensorboard. It is still logging the TensorBoard scalars and images. It just doesn't log the console output.
Do you also see the same in the terminal itself on the machine?
I just created a new virtual environment and the problem persists. There are only two dependencies, clearml and tensorflow. CostlyOstrich36, what logs are you referring to?
Yeah, I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML.
Yes I see it in the terminal on the machine
Just to make sure, did the logging to the ClearML server work previously and stopped working at some point?
There is clearly some connection to the ClearML server, as the task remains "running" for the entire training session, but there are no metrics or debug samples. And I see nothing in the logs to indicate there is an issue.
When I try to abort an experiment, I get this in the log:
clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###
but it does not stop anything; it just continues to run.
When the script is hung at the end, the experiment shows as failed in ClearML.
Not sure why that is related to saving images.
I am on 1.16.2
task = Task.init(project_name=model_config['ClearML']['project_name'],
                 task_name=model_config['ClearML']['task_name'],
                 continue_last_task=False,
                 auto_connect_streams=True)
What happens if you're running the reporting example from the ClearML github repository?
task.connect(model_config)
task.connect(DataAugConfig)
If these are separate dictionaries, you should probably use two sections:
task.connect(model_config, name="model config")
task.connect(DataAugConfig, name="data aug")
It is still getting stuck.
I notice that one of the scalars that gets logged early is logging the epoch, while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
Wait, so you are seeing some scalars?
while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
what are you seeing in your TB?
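If it helps narrow things down, you can also report a scalar manually with an explicit iteration number, independent of the TensorBoard auto-binding (a sketch; title/series here are just examples):
from clearml import Logger

# manual scalar report with an explicit iteration, bypassing the TB auto-logging
Logger.current_logger().report_scalar(
    title='debug', series='manual_check', value=0.5, iteration=26)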
Thanks ThankfulClams64, having code that can reproduce it is exactly what we need.
One thing I might have missed and is very important: what is your tensorboard package version?
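Something like this will print both versions (a quick sketch):
import tensorflow as tf
import tensorboard

# check the installed framework versions
print('tensorflow:', tf.__version__)
print('tensorboard:', tensorboard.__version__)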