Answered
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown. How do we troubleshoot this?

I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown.
How do we troubleshoot this?

  
  
Posted one month ago

Answers 69


This was on the same machine I am having issues with. It logs scalars correctly using the example code, but when I add in that callback, which just logs a random image to TensorBoard, I don't get any scalars logged.

  
  
Posted one month ago

It is not always reproducible. It seems like something happens that we do not understand, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments.

  
  
Posted one month ago

Not sure if this is helpful, but this is what I get when I Ctrl-C out of the hung script:

^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 317, in _handle_program_exit
    self.wait_for_events()
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 337, in wait_for_events
    return report_service.wait_for_events(timeout=timeout)
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 129, in wait_for_events
    if self._empty_state_event.wait(timeout=1.0):
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/utilities/process/mp.py", line 445, in wait
    return self._event.wait(timeout=timeout)
  File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 349, in wait
    self._cond.wait(timeout)
  File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 261, in wait
    return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt: 
  
  
Posted one month ago

@<1719524641879363584:profile|ThankfulClams64> you could try using the compare function in the UI to compare the experiments from the machine where the scalars are not reported properly with experiments from a machine that runs them properly. I suggest then replicating the environment exactly on the problematic machine.

  
  
Posted one month ago

Thanks @<1719524641879363584:profile|ThankfulClams64>, having code that can reproduce it is exactly what we need.
One thing I might have missed that is very important: what is your tensorboard package version?
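
For reference, one quick way to print the installed versions (a minimal check, not part of the original exchange; works on Python 3.8+):

import importlib.metadata as md

# query the installed distributions directly, independent of how the packages expose __version__
print("tensorboard:", md.version("tensorboard"))
print("tensorflow:", md.version("tensorflow"))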

  
  
Posted one month ago

The machine currently having the issue is on tensorboard==2.16.2

  
  
Posted one month ago

Correct, so I get something like this

ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page: 

but that is all

  
  
Posted one month ago

Console logs

  
  
Posted one month ago

Not sure why that is related to saving images

  
  
Posted one month ago

I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
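
For reference, that flag normally lives in clearml.conf under the sdk.development section (shown here as a sketch; the thread itself does not show the file):

sdk {
    development {
        # when false, ClearML will not collect and store the uncommitted git diff with the task
        store_uncommitted_code_diff: false
    }
}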

  
  
Posted one month ago

Hi @<1719524641879363584:profile|ThankfulClams64>, stopping all processes should do that; there is no programmatic way of doing that specifically. Did you try calling task.close() for all tasks you're using?
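
For illustration, a minimal sketch of that pattern (project/task names here are placeholders, not from the thread):

from clearml import Task

task = Task.init(project_name="my project", task_name="my experiment")
try:
    # ... training loop that reports scalars / debug images ...
    pass
finally:
    # flush pending reports and shut down ClearML's background reporting for this task
    task.close()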

  
  
Posted one month ago

So even if you abort it at the start of the experiment, it will keep running and reporting logs?

  
  
Posted one month ago

I just created a new virtual environment and the problem persists. There are only two dependencies: clearml and tensorflow. @<1523701070390366208:profile|CostlyOstrich36> what logs are you referring to?

  
  
Posted one month ago

So I was able to repeat the same behavior on a machine running this example [link]

by adding the following callback

# imports below are assumed from the surrounding example script (not shown in the original snippet)
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard


class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        from PIL import Image
        import io
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')

        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)

So it seems like there is some bug in how ClearML is logging TensorBoard images that causes everything to fail.
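
For comparison, since tf.Summary is the TF1-era protobuf API, a rough TF2-style version of that callback using tf.summary.image would look something like the sketch below (class name, logdir, and images argument are placeholders, not code from the thread):

import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import Callback

class TensorBoardImageV2(Callback):
    """Write one validation image to TensorBoard at the end of every epoch."""
    def __init__(self, logdir, images):
        super().__init__()
        self.writer = tf.summary.create_file_writer(logdir)
        self.images = images  # e.g. the MNIST validation images

    def on_epoch_end(self, epoch, logs=None):
        img = (255 * self.images[0].reshape(28, 28, 1)).astype('uint8')
        with self.writer.as_default():
            # tf.summary.image expects a 4-D batch: [k, height, width, channels]
            tf.summary.image('image', img[np.newaxis, ...], step=epoch)
        self.writer.flush()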

  
  
Posted one month ago

We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4

  
  
Posted one month ago

Yes, it shows in the UI and has the first epoch for some of the metrics, but that's it. It has run about 50 epochs and says it is still running, but there are no updates to the scalars or debug samples.

  
  
Posted one month ago

Is there some way to kill all connections from a machine to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted.

  
  
Posted one month ago

Is this just the console output while training?

  
  
Posted one month ago

Console output and also what you get on the ClearML task page under the console section

  
  
Posted one month ago

I'm not sure how to even troubleshoot this.

  
  
Posted one month ago

Yes it is logging to the console. The script does hang whenever it completes all the epochs when it is having the issue.

  
  
Posted one month ago

task.connect(model_config)
task.connect(DataAugConfig)

If these are separate dictionaries, you should probably use two sections:

    task.connect(model_config, name="model config")
    task.connect(DataAugConfig, name="data aug")

It is still getting stuck.
I notice that one of the scalars that gets logged early is logging the epoch while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26

Wait, so you are seeing some scalars?

while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26

what are you seeing in your TB?

  
  
Posted one month ago

Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides

ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page: 

The console logs continue to come in, but no scalars or debug images show up.

  
  
Posted one month ago

It seems similar to this [link]. Is it possible saving too many model weights causes the metric logging thread to die?

  
  
Posted one month ago

I do have uncommitted code changes. I can try to check at some point whether it would still have the problem without them. It seems like it could be repeated just by making a git repo with that script and adding a very large file. If I can repeat it, is it best to open an issue on GitHub?

  
  
Posted one month ago

I'm not sure if it still reports logs. But it will continue running on the machine

  
  
Posted one month ago

@<1719524641879363584:profile|ThankfulClams64> , are logs showing up without issue on the 'problematic' machine?

  
  
Posted one month ago

No, it completes and exits the script.

  
  
Posted one month ago

Thank you @<1719524641879363584:profile|ThankfulClams64> for opening the GitHub issue, hopefully we will be able to reproduce it and fix it quickly.

  
  
Posted one month ago

Hi @<1719524641879363584:profile|ThankfulClams64>! What tensorflow/keras version are you using? I noticed that in TensorBoardImage you are using tf.Summary, which no longer exists since tensorflow 2.2.3, which I believe is too old to work with tensorboard==2.16.2.
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your program reported.
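
For illustration, one way to do this is via Task.set_initial_iteration; a minimal sketch (project/task names and the iteration value are placeholders, not from the thread):

from clearml import Task

task = Task.init(project_name="my project", task_name="my experiment")

# offset all reported iterations so the resumed run continues from where
# the previous run stopped (the value here is just an example)
task.set_initial_iteration(1355)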

  
  
Posted one month ago