Hi @<1719524641879363584:profile|ThankfulClams64> ! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage callback you are using tf.Summary, which no longer exists since tensorflow 2.2.3, which I believe is too old to work with tensorboard==2.16.2.
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your program reported
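For example, something along these lines when resuming (a minimal sketch; the project/task names are placeholders, and I'm assuming the continue_last_task / set_initial_iteration pattern here):

from clearml import Task

# Resume the previous task instead of creating a new one
# (assumption: this is the resume pattern you are using; names are placeholders)
task = Task.init(
    project_name="my_project",
    task_name="my_experiment",
    continue_last_task=True,
)

# Continue reporting from where the last run stopped
task.set_initial_iteration(task.get_last_iteration())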
So I was able to repeat the same behavior on a machine running this example None by adding the following callback:
import io
import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras.callbacks import TensorBoard

class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        # Convert the single-channel image to RGB and encode it as a PNG
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        # tf.Summary is the TF1-style protobuf API
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')
        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)
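(As a side note, on tensorflow 2.x the tf.Summary protobuf used above is gone; the rough equivalent with the TF2 API would be something like the sketch below, where the log directory and tag are placeholders.)

import numpy as np
import tensorflow as tf

writer = tf.summary.create_file_writer("./logs")  # placeholder log dir

def log_image(tensor, step):
    # tf.summary.image expects a batch of HWC images (uint8 or float in [0, 1])
    img = np.stack((tensor, tensor, tensor), axis=2)[np.newaxis, ...]
    with writer.as_default():
        tf.summary.image("image", img, step=step)
    writer.flush()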
So it seems like there is some bug in how ClearML is logging tensorboard images that causes everything to fail
It is not always reproducible; it seems like something we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments
Any chance you have some uncommitted code changes such that, when they are not included, this works fine?
I'll update my clearml version. Unfortunately I do not have a small code snippet and it is not always repeatable. Is there some additional logging that can be turned on?
Thanks @<1719524641879363584:profile|ThankfulClams64> having a code that can reproduce it is exactly what we need.
One thing I might have missed, and it is very important: what is your tensorboard package version?
Thank you @<1719524641879363584:profile|ThankfulClams64> for opening the GitHub issue, hopefully we will be able to reproduce it and fix it quickly
Hi @<1719524641879363584:profile|ThankfulClams64> , does the experiment itself show on the ClearML UI?
@<1719524641879363584:profile|ThankfulClams64> , if you set auto_connect_streams to false nothing will be reported from your frameworks. With what frameworks are you working, tensorboard?
Okay I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls. So there isn't really anything in the ClearML task page console section
The same training works sometimes. But I'm not sure how to troubleshoot when it stops logging the metrics
Can you try with auto_connect_streams=True? Also, what version of the clearml SDK are you using?
Yea I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML
That makes sense... If you set auto_connect_streams to false, this means that auto reporting will be disabled, as per the documentation. If you set it to True then logging should resume.
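For example (a minimal sketch with placeholder names; the dict form of auto_connect_streams and its keys are my assumption of how the parameter can be used):

from clearml import Task

task = Task.init(
    project_name="my_project",      # placeholder
    task_name="debug_scalars",      # placeholder
    auto_connect_streams=True,      # re-enable automatic stream capture
    # or selectively, e.g. (assumed keys):
    # auto_connect_streams={"stdout": False, "stderr": True, "logging": True},
)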
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
Console output and also what you get on the ClearML task page under the console section
Is this just the console output while training?
Yes it is logging to the console. When it is having the issue, the script hangs after it completes all the epochs.
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
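For anyone else hitting this, I'm assuming that refers to the setting under sdk.development in clearml.conf, roughly:

sdk {
    development {
        # when false, the uncommitted git diff is not collected and stored with the task
        store_uncommitted_code_diff: false
    }
}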
I'm not sure if it still reports logs. But it will continue running on the machine
When the script is hung at the end the experiment says failed in ClearML
I just created a new virtual environment and the problem persists. There are only two dependencies, clearml and tensorflow. @<1523701070390366208:profile|CostlyOstrich36> what logs are you referring to?
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run
Yes I see it in the terminal on the machine
It was working for me. Anyway I modified the callback. Attached is the script that has the issue for me: whenever I add random_image_logger to the callbacks, it only logs some of the scalars for 1 epoch, then it gets stuck and never recovers. When I remove random_image_logger the scalars are correctly logged. Again, this is only on 1 computer; on our other computers logging works perfectly fine
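One thing that might help narrow it down on that machine: report a scalar and a debug image explicitly through the SDK logger, bypassing the TensorBoard auto-logging, and see whether those arrive. A rough sketch with placeholder names:

import numpy as np
from clearml import Task

task = Task.current_task()  # or the object returned by Task.init()
logger = task.get_logger()

for epoch in range(3):
    # explicit scalar report, independent of TensorBoard
    logger.report_scalar(title="debug", series="loss", value=1.0 / (epoch + 1), iteration=epoch)
    # explicit debug image report
    img = (np.random.rand(28, 28) * 255).astype("uint8")
    logger.report_image(title="debug", series="sample", iteration=epoch, image=img)
logger.flush()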
Do you also see the same in the terminal itself on the machine?
Hi @<1719524641879363584:profile|ThankfulClams64> , stopping all processes should do that; there is no programmatic way of doing that specifically. Did you try calling task.close() for all tasks you're using?
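i.e. something like this at the end of the script, for each task you created (minimal sketch):

from clearml import Task

task = Task.current_task()   # or keep the object returned by Task.init()
if task is not None:
    task.close()             # flush pending reports and close the task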
@<1719524641879363584:profile|ThankfulClams64> you could try using the compare function in the UI to compare the experiments on the machine where the scalars are not reported properly with the experiments on a machine that runs them properly. I suggest then replicating the environment exactly on the problematic machine. None