Answered
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown. How do we troubleshoot this?

I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown.
How do we troubleshoot this?

  
  
Posted 4 months ago

Answers 69


We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4 machines.

  
  
Posted 3 months ago

My bad, if you set auto_connect_streams to false, you basically disable the console logging... Please see the documentation:

auto_connect_streams (Union[bool, Mapping[str, bool]]) – Control the automatic logging of stdout and stderr.
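
For reference, a minimal sketch of the settings being discussed here; the dict form follows the documented stdout/stderr/logging keys, and the project/task names are just placeholders:

    from clearml import Task

    # Capture everything: stdout, stderr and the logging module go to the task's Console section
    task = Task.init(project_name="examples", task_name="console-capture",
                     auto_connect_streams=True)

    # Setting it to False disables console capture entirely (the Console section stays empty):
    # task = Task.init(project_name="examples", task_name="no-console",
    #                  auto_connect_streams=False)

    # Or control it per stream with a mapping:
    # task = Task.init(project_name="examples", task_name="partial-console",
    #                  auto_connect_streams={"stdout": True, "stderr": True, "logging": False})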
  
  
Posted 4 months ago

@<1719524641879363584:profile|ThankfulClams64> you could try using the compare function in the UI to compare the experiments on the machine where the scalars are not reported properly with the experiments on a machine that runs them properly. I suggest then replicating the environment exactly on the problematic machine. None

  
  
Posted 3 months ago

@<1719524641879363584:profile|ThankfulClams64>, can you provide a small code snippet that reproduces this behaviour? Can you also test with the latest version of clearml?

  
  
Posted 3 months ago

It seems similar to this None . Is it possible that saving too many model weights causes the metric logging thread to die?

  
  
Posted 3 months ago

That makes sense... If you set auto_connect_streams to false, this means that auto reporting will be disabled, as per the documentation. If you set it to True then logging should resume.

  
  
Posted 4 months ago

Yes, tensorboard. It is still logging the tensorboard scalars and images. It just doesn't log the console output.

  
  
Posted 4 months ago

The same training works sometimes. But I'm not sure how to troubleshoot when it stops logging the metrics

  
  
Posted 4 months ago

It is still getting stuck. I think the issue might have something to do with the iterations versus epochs. I notice that one of the scalars that gets logged early is logged against the epoch, while the remaining scalars seem to be logged against iterations, because the iteration value is 1355 instead of 26.

  
  
Posted 3 months ago

I am still having this issue. An update is that the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it says it did, but the experiment remains running on the computer.

  
  
Posted 3 months ago

Okay I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls. So there isn't really anything in the ClearML task page console section

  
  
Posted 4 months ago

Is there some way to kill all connections of a machine to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted.

  
  
Posted 3 months ago

I am on 1.16.2

    from clearml import Task

    task = Task.init(project_name=model_config['ClearML']['project_name'],
                     task_name=model_config['ClearML']['task_name'],
                     continue_last_task=False,
                     auto_connect_streams=True)
  
  
Posted 3 months ago

Hi @<1719524641879363584:profile|ThankfulClams64>, the logging is done by a separate process, I'm pretty sure it's not terminating all of a sudden. Did you manage to get a full log of such an experiment to share?

  
  
Posted 3 months ago

Thanks @<1719524641879363584:profile|ThankfulClams64>, having code that can reproduce it is exactly what we need.
One thing I might have missed, and it is very important: what is your tensorboard package version?

  
  
Posted 3 months ago

So even if you abort it on the start of the experiment it will keep running and reporting logs?

  
  
Posted 3 months ago

Any chance you have some uncommitted code changes such that, when they are not included, this works fine?

  
  
Posted 3 months ago

Not sure why that is related to saving images

  
  
Posted 3 months ago

It is not always reproducible. It seems like something that we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments.

  
  
Posted 3 months ago

STATUS MESSAGE: N/A
STATUS REASON: Signal None

  
  
Posted 3 months ago

Hi @<1719524641879363584:profile|ThankfulClams64>

I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML.
I use ClearML with pytorch 1.7.1, pytorch-lightning 1.2.2 and Tensorboard auto
All ClearML has the latest stable updates. (clearml 1.7.4, clearml-agent 1.7.2)

Is this still happening with the latest clearml (clearml==1.16.3rc2)?
What is the TB version?
I remember a fix regarding lightning support.
Also just making sure, are you using the default lightning TB logger?
How are you initializing the Task.init (i.e., could you copy the code here)?
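
For context, a minimal sketch of the setup being asked about, i.e. the default pytorch-lightning TensorBoard logger running alongside Task.init; the project/task names and directories are placeholders, and the actual model/datamodule are whatever you already train with:

    from clearml import Task
    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import TensorBoardLogger

    # Once the task exists, ClearML's TensorBoard binding picks up whatever the logger writes
    task = Task.init(project_name="examples", task_name="lightning-tb")

    # Default Lightning TensorBoard logger writing event files under ./lightning_logs
    tb_logger = TensorBoardLogger(save_dir="lightning_logs")

    trainer = Trainer(logger=tb_logger, max_epochs=1)
    # trainer.fit(model, datamodule=dm)  # model / dm are your own LightningModule and DataModule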

  
  
Posted 3 months ago

task.connect(model_config)
task.connect(DataAugConfig)

If these are separate dictionaries, you should probably use two sections:

    task.connect(model_config, name="model config")
    task.connect(DataAugConfig, name="data aug")

It is still getting stuck.
I notice that one of the scalars that gets logged early is logging the epoch while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26

Wait, so you are seeing some scalars?

while the remaining scalars seem to be iterations because the iteration value is 1355 instead of 26

what are you seeing in your TB?

  
  
Posted 3 months ago

So I am only seeing values for the first epoch. It seems like it does not track all of them, so maybe something is happening when it tries to log scalars.
I have seen it only log iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch.
Tensorboard is correct and works. I have never seen an issue in the tensorboard logs.
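
For anyone hitting the same epoch/iteration offset, a minimal sketch of the call mentioned above; placing it right after Task.init is an assumption, and the project/task names are placeholders:

    from clearml import Task

    task = Task.init(project_name="examples", task_name="restarted-run",
                     continue_last_task=False,
                     auto_connect_streams=True)

    # Reset the reported iteration offset so a restarted run logs scalars from 0 again
    task.set_initial_iteration(0)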

  
  
Posted 3 months ago

Then we also connect two dictionaries for configs

    task.connect(model_config)
    task.connect(DataAugConfig)
  
  
Posted 3 months ago

Hi @<1719524641879363584:profile|ThankfulClams64> , does the experiment itself show on the ClearML UI?

  
  
Posted 4 months ago

Console logs

  
  
Posted 3 months ago

No, it completes and exits the script.

  
  
Posted 3 months ago

Just to make sure, did the logging to the ClearML server work previously and then stop working at some point?

  
  
Posted 3 months ago

Hi we are currently having the issue. There is nothing in the console regarding ClearML besides

ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page: 

The console logs continue to come in but no scalars or debug images show up.

  
  
Posted 3 months ago

Yeah, I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML.

  
  
Posted 4 months ago