Then we also connect two dictionaries for configs:
task.connect(model_config)
task.connect(DataAugConfig)
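Roughly like this in context (values here are made up; DataAugConfig stands in for our own augmentation config object):

from clearml import Task

task = Task.init(project_name="...", task_name="...")

# model_config is a plain dict loaded from our config file (illustrative values)
model_config = {"lr": 1e-3, "batch_size": 32}
task.connect(model_config)

# DataAugConfig is our augmentation config; task.connect() also
# accepts objects/classes with attributes, not only dicts
class DataAugConfig:
    flip = True
    rotate_deg = 15

task.connect(DataAugConfig)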
I am using 1.15.0. Yes, I can try with auto_connect_streams set to True, but I believe I will still have the issue
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
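For anyone else hitting this, the setting goes in clearml.conf; a sketch of the relevant section (assuming the standard sdk.development block):

sdk {
    development {
        # skip storing the uncommitted git diff with the task
        store_uncommitted_code_diff: false
    }
}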
What happens if you're running the reporting example from the ClearML GitHub repository?
Correct, so I get something like this:
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
I'll update my clearml version. Unfortunately I do not have a small code snippet, and it is not always repeatable. Is there some additional logging that can be turned on?
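In the meantime I can try raising the SDK's own log level, assuming the clearml.* loggers follow standard Python logging (the clearml.Task - WARNING lines above suggest they do):

import logging

# Raise verbosity of all clearml.* loggers for this process
logging.basicConfig()
logging.getLogger("clearml").setLevel(logging.DEBUG)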
Is this just the console output while training?
I will try with clearml==1.16.3rc2 and see if it still has the issue
@<1719524641879363584:profile|ThankfulClams64>, if you set auto_connect_streams to false, nothing will be reported from your frameworks. Which frameworks are you working with, tensorboard?
Can you try with auto_connect_streams=True? Also, what version of the clearml SDK are you using?
Hi @<1719524641879363584:profile|ThankfulClams64>
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML.
I use ClearML with pytorch 1.7.1, pytorch-lightning 1.2.2, and TensorBoard auto-logging.
All ClearML packages are on the latest stable releases (clearml 1.7.4, clearml-agent 1.7.2)
Is this still happening with the latest clearml (clearml==1.16.3rc2)?
What is the TB version?
I remember a fix regarding lightning support
Also, just making sure, are you using the default lightning TB logger?
How are you initializing the Task.init (i.e., could you copy the code here)?
If you remove any reference of ClearML from the code on that machine, does it still hang?
The console logging still works. The abort showed up in the log but did not actually stop anything, and the process continued until I killed it.
This was on the same machine I am having issues with. It logs scalars correctly using the example code, but when I add in the callback that just logs a random image to tensorboard, I don't get any scalars logged
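The callback is essentially this kind of thing (a trimmed sketch, not the exact code; the hook signature matches recent pytorch-lightning releases, and it assumes the default TensorBoardLogger so trainer.logger.experiment is a SummaryWriter):

import torch
from pytorch_lightning import Callback

class RandomImageCallback(Callback):
    def on_train_epoch_end(self, trainer, pl_module):
        # Log a random CHW image straight to TensorBoard
        img = torch.rand(3, 64, 64)
        trainer.logger.experiment.add_image(
            "debug/random_image", img, global_step=trainer.current_epoch
        )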
Hi @<1719524641879363584:profile|ThankfulClams64> , does the experiment itself show on the ClearML UI?
It is not always reproducible. It seems like something we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments
When I try to abort an experiment, I get this in the log:
clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###
but it does not stop anything; it just continues to run
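As a stopgap I may poll the status myself inside the training loop and exit manually; a sketch (assuming Task.get_status(), which returns status strings like "stopped" for aborted tasks):

from clearml import Task

task = Task.current_task()
# If the server marked the task aborted but the automatic abort
# is not taking effect, exit the process ourselves
if task is not None and task.get_status() == "stopped":
    raise SystemExit("Task was aborted server-side; exiting")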
There is clearly some connection to the ClearML server, as the task remains "running" for the entire training session, but there are no metrics or debug samples, and I see nothing in the logs to indicate there is an issue
Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the ClearML task page console section
That makes sense... If you turn auto_connect_streams to false, this means that auto reporting will be disabled, as per the documentation. If you turn it to True, then logging should resume.
Is there some way to kill all of a machine's connections to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted
STATUS MESSAGE: N/A
STATUS REASON: Signal None
Do you also see the same in the terminal itself on the machine?
Any chance you have some uncommitted code changes such that, when they are not included, this works fine?
Yes, tensorboard. It is still logging the tensorboard scalars and images. It just doesn't log the console output
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
So I am only seeing values for the first epoch. It seems like it does not track all of them, so maybe something is happening when it tries to log scalars.
I have seen it only log iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch
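For anyone finding this later, the call in question goes right after the task is created (a minimal sketch; project/task names are placeholders):

from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")
# Reset the initial iteration so restarted runs don't get their
# scalar x-axis offset by the previous run's last iteration
task.set_initial_iteration(0)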
Tensorboard is correct and works. I have never seen an issue in the tensorboard logs
I am still having this issue. An update: the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it says it does, but the experiment remains running on the computer.
I am on 1.16.2
task = Task.init(project_name=model_config['ClearML']['project_name'],
task_name=model_config['ClearML']['task_name'],
continue_last_task=False,
auto_connect_streams=True)