This was on the same machine I am having issues with. It logs scalars correctly using the example code, but when I add in that callback, which just logs a random image to tensorboard, I don't get any scalars logged.
So I was able to repeat the same behavior on a machine running this example None
by adding the following callback:
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard  # assuming the tf.keras callback; adjust if using standalone keras


class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        from PIL import Image
        import io
        # Stack the single-channel image into RGB and PNG-encode it for the summary
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        # TF1-style image summary (tf.compat.v1.Summary on TF2)
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')
        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)
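For reference, a minimal sketch of how a callback like this might be wired into training; the model, dataset, and log_dir below are stand-ins rather than the original code, and it assumes a TF 1.x-era Keras where callbacks still receive validation_data, matching the snippet's tf.Summary usage:

# Hypothetical usage of the callback above on MNIST; all names are placeholders.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_val, y_val) = mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

model = models.Sequential([layers.Flatten(input_shape=(28, 28)),
                           layers.Dense(64, activation='relu'),
                           layers.Dense(10, activation='softmax')])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=2,
          callbacks=[TensorBoardImage(log_dir='./logs')])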
So it seems like there is some bug in how ClearML is logging tensorboard images that causes everything to fail.
So I am only seeing values for the first epoch. It seems like it does not track all of them so maybe something is happening when it tries to log scalars.
I have seen it log only iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch.
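For context, a minimal sketch of where that call sits relative to Task.init (project and task names are placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='image-callback')  # placeholder names
task.set_initial_iteration(0)  # start the reported iteration offset at 0 so epochs line up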
Tensorboard is correct and works. I have never seen an issue in the tensorboard logs
task.connect(model_config)
task.connect(DataAugConfig)
If these are separate dictionaries, you should probably use two sections:
task.connect(model_config, name="model config")
task.connect(DataAugConfig, name="data aug")
It is still getting stuck.
I notice that one of the scalars that gets logged early is logging the epoch while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
Wait, so you are seeing some scalars?
while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
what are you seeing in your TB?
It is still getting stuck. I think the issue might have something to do with the iterations versus epochs. I notice that one of the scalars that gets logged early is logging the epoch while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26
I will try with clearml==1.16.3rc2 and see if it still has the issue
Then we also connect two dictionaries for configs
task.connect(model_config)
task.connect(DataAugConfig)
I am on 1.16.2
task = Task.init(project_name=model_config['ClearML']['project_name'],
                 task_name=model_config['ClearML']['task_name'],
                 continue_last_task=False,
                 auto_connect_streams=True)
Hi @<1719524641879363584:profile|ThankfulClams64>
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML.
I use ClearML with pytorch 1.7.1, pytorch-lightning 1.2.2 and Tensorboard auto
All the ClearML packages are on the latest stable updates (clearml 1.7.4, clearml-agent 1.7.2).
Is this still happening with the latest clearml (clearml==1.16.3rc2)?
What is the TB version?
I remember a fix regarding lightning support
Also just making sure, are you using the default lightning TB logger?
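For reference, the "default" lightning TB logger being asked about is just the TensorBoardLogger that Trainer creates when no logger is passed; a minimal explicit sketch (save_dir, name, and max_epochs are placeholder values):

import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir='lightning_logs', name='example')  # placeholder paths
trainer = pl.Trainer(logger=logger, max_epochs=10)
# trainer.fit(model)  # model / datamodule omitted here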
How are you initializing the Task.init (i.e. could you copy the code here?)
Just to make sure, did the logging to the clearml server work previously and stop working at some point?
The console logging still works. Aborting the task showed up in the log but did not actually stop it, and the process continued until I killed it.
Hi @<1719524641879363584:profile|ThankfulClams64>, the logging is done by a separate process, I'm pretty sure it's not terminating all of a sudden. Did you manage to get a full log of such an experiment to share?
It seems similar to this None. Is it possible that saving too many model weights causes the metric logging thread to die?
We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4
I am still having this issue. An update is that the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it says it succeeded, but the experiment remains running on the computer.
Hi @<1719524641879363584:profile|ThankfulClams64>, stopping all processes should do that, there is no programmatic way of doing that specifically. Did you try calling task.close() for all tasks you're using?
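A minimal sketch of what that would look like, assuming a single task handle per process (project and task names are placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='run')  # placeholder names
try:
    pass  # training code goes here
finally:
    task.close()  # flush pending reports and close the connection to the server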
Is there some way to kill all connections of a machine to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted.
Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:
ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page:
The console logs continue to come in but no scalars or debug images show up.
I'll update my clearml version. Unfortunately I do not have a small code snippet and it is not always repeatable. Is there some additional logging that can be turned on?
@<1719524641879363584:profile|ThankfulClams64>, can you provide a small code snippet that reproduces this behaviour? Can you also test with the latest version of clearml?
I am using 1.15.0. Yes, I can try with auto_connect_streams set to True, but I believe I will still have the issue.
Can you try with auto_connect_streams=True? Also, what version of the clearml sdk are you using?
Yea I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML.
That makes sense... If you turn auto_connect_streams to false, this means that auto reporting will be disabled, as per the documentation. If you turn it to True then logging should resume.
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
My bad, if you set auto_connect_streams to false, you basically disable the console logging... Please see the documentation:
auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
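For what it's worth, a sketch of the mapping form that docstring mentions, which gives per-stream control instead of all-or-nothing; the keys shown (stdout, stderr, logging) are the ones the docstring lists, but verify them against your SDK version:

from clearml import Task

task = Task.init(project_name='examples',   # placeholder names
                 task_name='run',
                 auto_connect_streams={'stdout': False,   # skip stdout capture
                                       'stderr': False,   # skip stderr capture
                                       'logging': True})  # still capture python logging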
Yes tensorboard. It is still logging the tensorboard scalars and images. It just doesn't log the console output.
@<1719524641879363584:profile|ThankfulClams64> , if you set auto_connect_streams to false nothing will be reported from your frameworks. With what frameworks are you working, tensorboard?
Okay I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls. So there isn't really anything in the ClearML task page console section
Console output and also what you get on the ClearML task page under the console section