I will try with clearml==1.16.3rc2 and see if it still has the issue
Is this just the console output while training?
Yeah, in all the YouTube videos it is just there, with no mention of how to get it. But I don't have it
STATUS MESSAGE: N/A
STATUS REASON: Signal None
I am still having this issue. An update: the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it reports that it did, but the experiment remains running on the computer.
Thanks! It looks like I can set
auto_connect_streams = False
in the task init, at least to try.
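For reference, a minimal sketch of what I mean (project/task names are made up; assuming a clearml version where Task.init accepts auto_connect_streams):

```python
from clearml import Task

# Disable automatic capture of stdout/stderr, which is what was generating
# most of the console-log API traffic. TensorBoard scalars and images are
# still reported as usual.
task = Task.init(
    project_name="my_project",   # hypothetical
    task_name="keras_training",  # hypothetical
    auto_connect_streams=False,
)
```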
We are using Keras, so it is logging progress bars by default, which I think we could turn off. I just wouldn't expect logging text to require so many API calls. Especially since they charge by API call, I assumed it would be better managed.
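For example, something like this should cut the progress-bar text down to one line per epoch (toy model, just to show the verbose knob):

```python
import numpy as np
from tensorflow import keras

# Minimal model purely for illustration.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(64, 4), np.random.rand(64, 1)

# verbose=2 prints one summary line per epoch instead of a live progress bar
# (verbose=0 silences output entirely), so there is far less console text
# for ClearML to stream.
model.fit(x, y, epochs=3, verbose=2)
```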
Yeah, I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't record to ClearML
I guess I don't understand; I am referring to the clearml configuration file on the agent. The only way I have gotten it to work consistently is to install the environment beforehand and set that environment variable. Otherwise it seems ClearML is not correctly saving the environment in a way that lets it be reproduced. In my case the issue is that it installs tensorflow instead of tensorflow[and-cuda], which is what was actually installed
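One workaround I might try is pinning the requirement explicitly before Task.init (a sketch; I'm assuming Task.add_requirements preserves extras like [and-cuda] when the agent rebuilds the environment):

```python
from clearml import Task

# Must be called before Task.init so the recorded requirements include the
# CUDA-enabled extra that was actually installed locally. Assumption: the
# extras syntax survives into the agent's pip install step.
Task.add_requirements("tensorflow[and-cuda]")

task = Task.init(project_name="my_project", task_name="keras_training")  # hypothetical names
```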
How do you get answers to these types of questions? As far as I can tell the model registry is broken, and there is no support through the actual application
It was working for me. Anyway, I modified the callback. Attached is the script that has the issue for me: whenever I add random_image_logger to the callbacks, it only logs some of the scalars for 1 epoch. It then gets stuck and never recovers. When I remove random_image_logger, the scalars are correctly logged. Again, this is only on 1 computer; on our other machines logging works perfectly fine
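The attached script isn't included here, so this is only a hypothetical reconstruction of what the random_image_logger callback could look like (names and shapes are made up):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class RandomImageLogger(keras.callbacks.Callback):
    """Logs a few random sample images to TensorBoard at the end of each epoch."""

    def __init__(self, log_dir, images, num_samples=4):
        super().__init__()
        self.writer = tf.summary.create_file_writer(log_dir)
        self.images = images          # float32 array shaped [n, h, w, c]
        self.num_samples = num_samples

    def on_epoch_end(self, epoch, logs=None):
        idx = np.random.choice(len(self.images), self.num_samples, replace=False)
        with self.writer.as_default():
            # ClearML's TensorBoard bindings pick these up as debug samples.
            tf.summary.image("random_samples", self.images[idx], step=epoch,
                             max_outputs=self.num_samples)
        self.writer.flush()

# Usage: model.fit(..., callbacks=[RandomImageLogger("./logs", images)])
```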
Another thing I noticed is that aborting the experiment does not work when this is happening; it just continues to run
I'll update my clearml version. Unfortunately I do not have a small code snippet, and it is not always reproducible. Is there some additional logging that can be turned on?
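One thing I could try on my end (an assumption on my part: clearml uses the standard logging module internally, so raising the level should surface its reporter/session messages):

```python
import logging

# Raise the logging level before Task.init; if clearml's internal loggers
# hang off the standard logging tree, their debug messages should now
# show up in the console.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("clearml").setLevel(logging.DEBUG)
```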
Do the metrics not get added from the training? I did not add any metadata, but I assumed you would be able to select metrics from the training run that generated the model
We are running the same code on multiple machines and it just randomly happens. Currently we are having the issue on 1 out of 4 machines
I didn't do a very scientific comparison, but the number of API calls did decrease substantially by turning off auto_connect_streams. It is probably about 100k API calls per day with 1 experiment running, where before it was maybe 300k API calls per day. That still seems like a lot when I only run 20-30 epochs in a day
Not sure why that is related to saving images
Yes I see it in the terminal on the machine
There is clearly some connection to the ClearML server, as the task remains "running" for the entire training session, but there are no metrics or debug samples. And I see nothing in the logs to indicate there is an issue
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
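For anyone else hitting this, the setting lives in clearml.conf under the sdk section; this is roughly what I changed:

```
sdk {
    development {
        # Skip capturing the (large) uncommitted git diff at task creation.
        store_uncommitted_code_diff: false
    }
}
```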
Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:
ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page:
The console logs continue to come in, but no scalars or debug images show up.
Not sure if this is helpful, but this is what I get when I ctrl-c out of the hung script:
^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", lin...
I just created a new virtual environment and the problem persists. There are only two dependencies: clearml and tensorflow. @CostlyOstrich36 what logs are you referring to?
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
Yes, TensorBoard. It is still logging the TensorBoard scalars and images. It just doesn't log the console output
So I am only seeing values for the first epoch. It seems like it does not track all of them, so maybe something is happening when it tries to log scalars.
I have seen it only log iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch
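i.e. something like this right after creating the task (a minimal sketch; project/task names are hypothetical):

```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="keras_training")  # hypothetical names

# Reset the iteration offset so reported scalars line up with epoch 0
# instead of a resumed/implicit iteration count.
task.set_initial_iteration(0)
```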
TensorBoard is correct and works. I have never seen an issue in the TensorBoard logs
When the script is hung at the end, the experiment says "failed" in ClearML
The machine currently having the issue is on tensorboard==2.16.2
Sometimes I get no scalars, but the console logging always seems to be working