
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC, but no scalars or debug samples are shown.
How do we troubleshoot this?

Posted 4 months ago

Can you share any of the logs?

Posted 4 months ago

When I try to abort an experiment, I get this in the log:

clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###

but it does not stop anything; the experiment just continues to run.

Posted 3 months ago

Do you also see the same in the terminal itself on the machine?

Posted 3 months ago

Yes, I see it in the terminal on the machine.

Posted 3 months ago

I do have uncommitted code changes. I can try to check at some point whether the problem still occurs without them. It seems like it can be reproduced just by making a git repo with that script and adding a very large file. If I can reproduce it, is it best to open an issue on GitHub?

Posted 3 months ago

Running clearml_example.py in None reproduces the issue

Posted 3 months ago

Thank you @<1719524641879363584:profile|ThankfulClams64> for opening the GitHub issue, hopefully we will be able to reproduce it and fix it quickly.

Posted 3 months ago

I created an issue: None

Posted 3 months ago

Yes, it shows in the UI and has the first epoch for some of the metrics, but that's it. It has run about 50 epochs; it says it is still running, but there are no updates to the scalars or debug samples.

Posted 4 months ago

I'm not sure how to even troubleshoot this.

Posted 4 months ago

There is clearly some connection to the ClearML server, as the experiment remains "running" for the entire training session, but there are no metrics or debug samples. And I see nothing in the logs to indicate there is an issue.

Posted 4 months ago

I am using 1.15.0. Yes, I can try with auto_connect_streams set to True; I believe I will still have the issue.
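
For reference, a minimal sketch of that test; the project and task names below are placeholders, and auto_connect_streams defaults to True in recent clearml versions:

from clearml import Task

# auto_connect_streams controls whether stdout/stderr/logging output
# is captured and shipped to the server; passing it explicitly just
# makes the repro explicit (names here are placeholders for this sketch)
task = Task.init(
    project_name='example-project',
    task_name='stream-capture-test',
    auto_connect_streams=True,
)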

Posted 4 months ago

Hi @<1719524641879363584:profile|ThankfulClams64> , stopping all processes should do that; there is no programmatic way of doing that specifically. Did you try calling task.close() for all the tasks you're using?
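
A minimal sketch of that suggestion, assuming the task handle is the one returned by Task.init():

from clearml import Task

task = Task.current_task()  # or keep the object returned by Task.init()
if task is not None:
    task.close()  # flush pending reports and detach from this process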

Posted 3 months ago

The console logging still works. Aborting the task showed up in the log but did not work, and the process continued until I killed it.

Posted 3 months ago

So I was able to repeat the same behavior on a machine running this example None

by adding the following callback:

import io

import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras.callbacks import TensorBoard  # assumed import for the base callback


class TensorBoardImage(TensorBoard):
    @staticmethod
    def make_image(tensor):
        # tile the single-channel image into RGB and PNG-encode it
        tensor = np.stack((tensor, tensor, tensor), axis=2)
        height, width, channels = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        # TF1-style summary image (tf.compat.v1.Summary on TensorFlow 2.x)
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channels,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs=None):
        if logs is None:
            logs = {}
        super(TensorBoardImage, self).on_epoch_end(epoch, logs)
        images = self.validation_data[0]  # 0 - data; 1 - labels
        img = (255 * images[0].reshape(28, 28)).astype('uint8')

        image = self.make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='image', image=image)])
        self.writer.add_summary(summary, epoch)

So it seems like there is some bug in how ClearML is logging TensorBoard images that causes everything to fail.

Posted 3 months ago

This was on the same machine I am having issues with. It logs scalars correctly using the example code, but when I add in that callback, which just logs a random image to TensorBoard, I don't get any scalars logged.

Posted 3 months ago

Sometimes I get no scalars, but the console logging always seems to be working.

Posted 3 months ago

Another thing I notice is that aborting the experiment does not work when this is happening; it just continues to run.

Posted 3 months ago

@<1719524641879363584:profile|ThankfulClams64> , are logs showing up without issue on the 'problematic' machine?

Posted 3 months ago

I just created a new virtual environment and the problem persists. There are only two dependencies, clearml and tensorflow. @<1523701070390366208:profile|CostlyOstrich36> what logs are you referring to?

Posted 3 months ago

Yes, it is logging to the console. The script does hang whenever it completes all the epochs while it is having the issue.

Posted 3 months ago

If you remove any reference to ClearML from the code on that machine, does it still hang?

Posted 3 months ago

When the script is hung at the end, the experiment shows as failed in ClearML.

Posted 3 months ago

Not sure if this is helpful, but this is what I get when I Ctrl-C out of the hung script:

^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 317, in _handle_program_exit
    self.wait_for_events()
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 337, in wait_for_events
    return report_service.wait_for_events(timeout=timeout)
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", line 129, in wait_for_events
    if self._empty_state_event.wait(timeout=1.0):
  File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/utilities/process/mp.py", line 445, in wait
    return self._event.wait(timeout=timeout)
  File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 349, in wait
    self._cond.wait(timeout)
  File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 261, in wait
    return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt: 
Posted 3 months ago

Does any exit code appear? What is the status message and status reason in the 'INFO' section?

Posted 3 months ago

I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue.
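
For anyone else hitting this, a sketch of where that setting lives, assuming the default clearml.conf layout:

sdk {
    development {
        # skip storing the uncommitted git diff with the task
        store_uncommitted_code_diff: false
    }
}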

Posted 3 months ago

What happens if you run the reporting example from the ClearML GitHub repository?

Posted 4 months ago

Correct, so I get something like this:

ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page: 

but that is all

Posted 4 months ago

Can someone help with this?

Posted 4 months ago

I'll update my clearml version. Unfortunately I do not have a small code snippet and it is not always repeatable. Is there some additional logging that can be turned on?
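
One sketch for extra verbosity, assuming the SDK's loggers hang off the 'clearml' logger name (the warning above is emitted by clearml.Task):

import logging

# raise log verbosity for the SDK's loggers before Task.init() runs
logging.basicConfig(level=logging.INFO)
logging.getLogger('clearml').setLevel(logging.DEBUG)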

Posted 3 months ago