Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
sometimes I get no scalars, but the console logging always seems to be working
Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the console section of the ClearML task page
Any chance you have some uncommitted code changes such that, when they're not included, this works fine?
I am on 1.16.2
task = Task.init(project_name=model_config['ClearML']['project_name'],
                 task_name=model_config['ClearML']['task_name'],
                 continue_last_task=False,
                 auto_connect_streams=True)
I'm not sure if it still reports logs, but it will continue running on the machine
So even if you abort it on the start of the experiment it will keep running and reporting logs?
What happens if you're running the reporting example from the ClearML github repository?
It seems similar to this None. Is it possible that saving too many model weights causes the metric logging thread to die?
The console logging still works. Aborting the task showed up in the log but did not work, and the process continued until I killed it.
Console output and also what you get on the ClearML task page under the console section
Not sure why that is related to saving images
If you remove any reference of ClearML from the code on that machine, does it still hang?
The same training works sometimes. But I'm not sure how to troubleshoot when it stops logging the metrics
That makes sense... If you turn auto_connect_streams to False, this means that auto reporting will be disabled, as per the documentation. If you turn it to True, then logging should resume.
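For reference, something along these lines (a minimal sketch; the project and task names here are placeholders, not your actual values):

from clearml import Task

# auto_connect_streams=True (the default) re-enables automatic capture of
# stdout/stderr/logging output into the task's console section
task = Task.init(project_name='examples',
                 task_name='console-logging-check',
                 auto_connect_streams=True)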
So I was able to repeat the same behavior on a machine running this example None by adding the following callback:
import io

import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras.callbacks import TensorBoard

class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        # Stack the 2D grayscale array into 3 channels and PNG-encode it
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')
        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)
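For completeness, I attach it to training roughly like this (a sketch; the model, data, and log_dir are assumptions, not the exact example code):

# hypothetical wiring: TensorBoardImage subclasses the Keras TensorBoard callback,
# and validation_data must be passed to fit() so on_epoch_end can grab an image
tb_image = TensorBoardImage(log_dir='/tmp/tensorboard_logs')
model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=5,
          callbacks=[tb_image])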
So it seems like there is some bug in how ClearML is logging TensorBoard images that causes everything to fail
Yes, TensorBoard. It is still logging the TensorBoard scalars and images. It just doesn't log the console output
When I try to abort an experiment, I get this in the log
clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###
but it does not stop anything; it just continues to run
Then we also connect two dictionaries for configs:
task.connect(model_config)
task.connect(DataAugConfig)
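(For reproducing: the same calls can also be given explicit section names via Task.connect's optional name argument; a sketch, with the section names being my own choice:)

# with explicit names the two configs show up as separate sections in the
# task's CONFIGURATION tab instead of both landing in the default General section
task.connect(model_config, name='model_config')
task.connect(DataAugConfig, name='data_aug_config')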
This was on the same machine I am having issues with. It logs scalars correctly using the example code, but when I add in that callback, which just logs a random image to TensorBoard, I don't get any scalars logged
Just to make sure, did the logging to the ClearML server work previously and stop working at some point?
STATUS MESSAGE: N/A
STATUS REASON: Signal None
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
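For anyone else hitting this, that setting lives in clearml.conf under the sdk.development section; a sketch of the relevant fragment:

# ~/clearml.conf (partial)
sdk {
    development {
        # do not store the uncommitted git diff with the task
        store_uncommitted_code_diff: false
    }
}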
Do you also see the same in the terminal itself on the machine?
Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides
ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page:
The console logs continue to come in, but no scalars or debug images show up.
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run
It is not always reproducible. It seems like something we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments