I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown. How do we troubleshoot this?

I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC, but no scalars or debug samples are shown.
How do we troubleshoot this?

  
  
Posted 4 months ago

Answers 69


Hi @<1719524641879363584:profile|ThankfulClams64>, does the experiment itself show in the ClearML UI?

  
  
Posted 4 months ago

Yes, it shows in the UI and has the first epoch for some of the metrics, but that's it. It has run about 50 epochs and says it is still running, but there are no updates to the scalars or debug samples.

  
  
Posted 4 months ago

I'm not sure how to even troubleshoot this.

  
  
Posted 4 months ago

Can someone help with this?

  
  
Posted 4 months ago

What happens when you run the reporting example from the ClearML GitHub repository?
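
For reference, such a reporting script boils down to something like the following minimal sketch (hedged; this is not the exact repository script, and the project/task names are placeholders):

from clearml import Task

# Create a task; with default arguments everything is auto-logged.
task = Task.init(project_name="Debugging", task_name="reporting sanity check")
logger = task.get_logger()

# Report a simple scalar series; it should appear under SCALARS in the UI.
for iteration in range(100):
    logger.report_scalar(title="sanity", series="value", value=iteration * 0.1, iteration=iteration)

task.close()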

  
  
Posted 4 months ago

The same training works sometimes, but I'm not sure how to troubleshoot when it stops logging the metrics.

  
  
Posted 4 months ago

There is clearly some connection to the ClearML server, as the task remains "running" for the entire training session, but there are no metrics or debug samples, and I see nothing in the logs to indicate an issue.

  
  
Posted 4 months ago

Can you share any of the logs?

  
  
Posted 4 months ago

Is this just the console output while training?

  
  
Posted 4 months ago

Console output, and also what you get on the ClearML task page under the console section.

  
  
Posted 4 months ago

Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the ClearML task page console section.

  
  
Posted 4 months ago

@<1719524641879363584:profile|ThankfulClams64>, if you set auto_connect_streams to False, nothing will be reported from your frameworks. What frameworks are you working with, TensorBoard?

  
  
Posted 4 months ago

Yes, TensorBoard. It is still logging the TensorBoard scalars and images; it just doesn't log the console output.

  
  
Posted 4 months ago

My bad: if you set auto_connect_streams to False, you basically disable the console logging... Please see the documentation:

auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
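
In code, the difference looks roughly like this (a sketch based on that signature; the mapping keys follow the documented stdout/stderr/logging options, and project/task names are placeholders):

from clearml import Task

# Default: stdout/stderr are captured and shown in the console section.
task = Task.init(project_name="MyProject", task_name="with console", auto_connect_streams=True)

# Disable console capture entirely (what this thread is running):
# task = Task.init(project_name="MyProject", task_name="no console", auto_connect_streams=False)

# Or control each stream selectively with a mapping:
# task = Task.init(project_name="MyProject", task_name="partial",
#                  auto_connect_streams={"stdout": True, "stderr": True, "logging": False})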
  
  
Posted 4 months ago

Correct, so I get something like this:

ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page: 

but that is all

  
  
Posted 4 months ago

That makes sense... If you turn auto_connect_streams to False, this means that auto reporting will be disabled, as per the documentation. If you turn it to True, then logging should resume.

  
  
Posted 4 months ago

Yeah, I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML.

  
  
Posted 4 months ago

Can you try with auto_connect_streams=True? Also, what version of the clearml SDK are you using?
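
To confirm which SDK version the training process actually imports (a quick check; useful when several environments are installed):

import clearml

print(clearml.__version__)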

  
  
Posted 4 months ago

I am using 1.15.0. Yes, I can try with auto_connect_streams set to True, but I believe I will still have the issue.

  
  
Posted 4 months ago

@<1719524641879363584:profile|ThankfulClams64>, can you provide a small code snippet that reproduces this behaviour? Can you also test with the latest version of clearml?

  
  
Posted 4 months ago

I'll update my clearml version. Unfortunately, I do not have a small code snippet, and it is not always repeatable. Is there some additional logging that can be turned on?

  
  
Posted 4 months ago
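
For reference, one hedged option for additional logging, assuming the CLEARML_LOG_LEVEL environment variable is supported by this SDK version (check the ClearML configuration reference), is to raise the SDK's own log verbosity before the task starts:

import os

# Assumption: CLEARML_LOG_LEVEL controls the SDK's internal logger verbosity;
# it must be set before clearml is imported/initialized.
os.environ["CLEARML_LOG_LEVEL"] = "DEBUG"

from clearml import Task

task = Task.init(project_name="Debugging", task_name="verbose run")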

Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:

ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page: 

The console logs continue to come in, but no scalars or debug images show up.

  
  
Posted 4 months ago

Is there some way to kill all of a machine's connections to the ClearML server? This does seem to be related to restarting a task, or running a new task quickly after a task fails or is aborted.

  
  
Posted 4 months ago

Hi @<1719524641879363584:profile|ThankfulClams64>, stopping all processes should do that; there is no programmatic way of doing that specifically. Did you try calling task.close() for all the tasks you're using?
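
A minimal sketch of that pattern (project and task names are placeholders):

from clearml import Task

task = Task.init(project_name="MyProject", task_name="run 1")
try:
    pass  # ... training loop ...
finally:
    # Explicitly close the task so its background reporting shuts down
    # cleanly before another task starts on the same machine.
    task.close()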

  
  
Posted 3 months ago

I am still having this issue. One update: the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it reports success, but the experiment remains running on the computer.

  
  
Posted 3 months ago

We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4 machines.

  
  
Posted 3 months ago

It seems similar to this: None. Is it possible that saving too many model weights causes the metric logging thread to die?
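
One way to probe this (a hedged sketch; it relies on the documented Task.flush call, and the epoch loop body is a placeholder):

from clearml import Task

task = Task.current_task()  # the task created earlier via Task.init

for epoch in range(10):
    # ... one epoch of training and checkpoint saving here ...
    # Force buffered metrics and uploads out; if this stalls or raises,
    # it points at the reporting channel rather than the training code.
    task.flush(wait_for_uploads=True)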

  
  
Posted 3 months ago

Hi @<1719524641879363584:profile|ThankfulClams64>, the logging is done by a separate process, and I'm pretty sure it's not terminating all of a sudden. Did you manage to get a full log of such an experiment to share?

  
  
Posted 3 months ago

The console logging still works. Aborting the task showed up in the log but did not work, and the process continued until I killed it.

  
  
Posted 3 months ago

Just to make sure: did the logging to the ClearML server work previously and stop working at some point?

  
  
Posted 3 months ago