
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC, but no scalars or debug samples are shown.
How do we troubleshoot this?

  
  
Posted one year ago

Answers 69


I just created a new virtual environment and the problem persists. There are only two dependencies: clearml and tensorflow. @CostlyOstrich36 what logs are you referring to?

  
  
Posted one year ago

Can someone help with this?

  
  
Posted one year ago

We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4 machines.

  
  
Posted one year ago

So even if you abort it at the start of the experiment, it will keep running and reporting logs?

  
  
Posted one year ago

Yes, it shows in the UI and has the first epoch for some of the metrics, but that's it. It has run about 50 epochs and it says it is still running, but there are no updates to the scalars or debug samples.

  
  
Posted one year ago

Not sure why that is related to saving images

  
  
Posted one year ago

Yes I see it in the terminal on the machine

  
  
Posted one year ago

@ThankfulClams64 you could try using the compare function in the UI to compare the experiments on the machine where the scalars are not reported properly against the experiments on a machine that runs them properly. I would then suggest replicating the environment exactly on the problematic machine.

  
  
Posted one year ago

Running clearml_example.py reproduces the issue.

  
  
Posted one year ago

When the script hangs at the end, the experiment says failed in ClearML.

  
  
Posted one year ago

I am using 1.15.0. Yes, I can try with auto_connect_streams set to True, but I believe I will still have the issue.

  
  
Posted one year ago

What happens if you run the reporting example from the ClearML GitHub repository?
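For reference, not the repository example itself, but a minimal sketch of explicit scalar reporting with the SDK (project/task names are placeholders). If this reports correctly while the training script does not, the problem is likely in the framework auto-logging rather than the SDK connection:

    from clearml import Task

    # placeholder project/task names for illustration
    task = Task.init(project_name="Debug", task_name="manual scalar reporting test")
    logger = task.get_logger()

    # report a few scalars; they should appear under the task's SCALARS tab
    for i in range(10):
        logger.report_scalar(title="test", series="value", value=i * 0.5, iteration=i)

    task.close()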

  
  
Posted one year ago

Console logs

  
  
Posted one year ago

It is not always reproducible. It seems like something we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments.

  
  
Posted one year ago

Can you try with auto_connect_streams=True? Also, what version of the clearml SDK are you using?

  
  
Posted one year ago

My bad, if you set auto_connect_streams to false, you basically disable the console logging... Please see the documentation:

auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
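For example, the mapping form documented above lets you control each stream separately; a minimal sketch (project/task names are placeholders):

    from clearml import Task

    # a bool applies to all streams; a mapping controls each one individually
    task = Task.init(
        project_name="Debug",
        task_name="stream capture test",
        auto_connect_streams={"stdout": True, "stderr": True, "logging": False},
    )
    print("this line should show up in the task's CONSOLE section")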
  
  
Posted one year ago

@ThankfulClams64, if you set auto_connect_streams to False, nothing will be reported from your frameworks. Which frameworks are you working with, TensorBoard?
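For context, ClearML normally picks up TensorBoard scalars automatically, as long as Task.init runs before the summary writer is created; a minimal sketch (project/task names are placeholders):

    import tensorflow as tf
    from clearml import Task

    # Task.init must run before the writer is created so the hook is in place
    task = Task.init(project_name="Debug", task_name="tensorboard capture test")

    writer = tf.summary.create_file_writer("./tb_logs")
    with writer.as_default():
        for step in range(10):
            # with framework auto-logging enabled (the default), ClearML
            # should pick this up and report it as a scalar
            tf.summary.scalar("loss", 1.0 / (step + 1), step=step)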

  
  
Posted one year ago

Console output and also what you get on the ClearML task page under the console section

  
  
Posted one year ago

Can you share any of the logs?

  
  
Posted one year ago

I am on 1.16.2

    from clearml import Task

    # model_config is loaded elsewhere in the script
    task = Task.init(project_name=model_config['ClearML']['project_name'],
                     task_name=model_config['ClearML']['task_name'],
                     continue_last_task=False,
                     auto_connect_streams=True)
  
  
Posted one year ago

Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:

ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page: 

The console logs continue to come in, but no scalars or debug images show up.
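One possible diagnostic, a sketch rather than a confirmed fix: force ClearML to flush its buffered reports and see whether anything reaches the server. task.flush is part of the SDK; whether it helps in this situation is an assumption:

    from clearml import Task

    # grab the task created by Task.init earlier in the script
    task = Task.current_task()

    # force buffered reports (scalars, console, debug samples) to be sent now;
    # wait_for_uploads=True blocks until the upload finishes
    task.flush(wait_for_uploads=True)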

  
  
Posted one year ago

The machine currently having the issue is on tensorboard==2.16.2

  
  
Posted one year ago

The same training works sometimes, but I'm not sure how to troubleshoot when it stops logging the metrics.

  
  
Posted one year ago

Thanks @ThankfulClams64, having code that can reproduce it is exactly what we need.
One thing I might have missed that is very important: what is your tensorboard package version?

  
  
Posted one year ago

When I try to abort an experiment, I get this in the log:

clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###

but it does not stop anything; it just continues to run.

  
  
Posted one year ago

It is still getting stuck. I think the issue might have something to do with iterations versus epochs. I notice that one of the scalars that gets logged early reports the epoch, while the remaining scalars seem to report iterations, because the iteration value is 1355 instead of 26.
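To illustrate the mismatch being described, a sketch of how the same run can produce series ending at very different x-axis positions depending on whether the reported iteration is an epoch counter or a global batch counter (all numbers and names are illustrative):

    from clearml import Task

    task = Task.init(project_name="Debug", task_name="iteration vs epoch")  # placeholders
    logger = task.get_logger()

    epochs, batches_per_epoch = 26, 52  # illustrative: 26 * 52 = 1352, close to 1355

    for epoch in range(epochs):
        # reported against the epoch counter: this series ends near x = 26
        logger.report_scalar("epoch-metrics", "val_loss", value=1.0 / (epoch + 1), iteration=epoch)
        for batch in range(batches_per_epoch):
            step = epoch * batches_per_epoch + batch
            # reported against the global batch counter: this series ends near x = 1352
            logger.report_scalar("batch-metrics", "train_loss", value=1.0 / (step + 1), iteration=step)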

  
  
Posted one year ago

Sometimes I get no scalars, but the console logging always seems to be working.

  
  
Posted one year ago

Does any exit code appear? What is the status message and status reason in the 'INFO' section?
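Those fields can also be read with the SDK; a sketch, assuming Task.get_task and that the backend data object exposes status_reason and status_message (the task ID from this thread is reused as a placeholder):

    from clearml import Task

    # task ID copied from the UI
    t = Task.get_task(task_id="0174d5b9d7164f47bd10484fd268e3ff")

    print(t.get_status())         # e.g. "in_progress", "failed", "aborted"
    print(t.data.status_reason)   # assumed field on the backend data object
    print(t.data.status_message)  # assumed field on the backend data object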

  
  
Posted one year ago

Do you also see the same in the terminal itself on the machine?

  
  
Posted one year ago

Yes, it is logging to the console. The script does hang after completing all the epochs when it is having the issue.

  
  
Posted one year ago