
Do the metrics not get added from the training? I did not add any metadata but I assumed you would be able to select metrics from the training that generated the model
Not sure why that is related to saving images
When the script hangs at the end, the experiment shows as failed in ClearML
It looks like it creates a task_repository folder in the virtual environment folder. There is a way to specify your virtual environment folder but I haven't found any way to specify the git directory
How do you get answers to these types of questions? As far as I can tell the model registry is broken, and there is no support through the actual application
I'll update my clearml version. Unfortunately I do not have a small code snippet and it is not always repeatable. Is there some additional logging that can be turned on?
I have file_history_size: 1000
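For reference, that lives in the clearml.conf on the training machine, roughly like this (just the relevant key, everything else omitted):

sdk {
  metrics {
    # number of debug image files kept per title/series before they start rotating
    file_history_size: 1000
  }
}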
I still get images for the following epochs. But sometimes it seems like the UI limits the view to 32 images.
Okay I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls. So there isn't really anything in the ClearML task page console section
Not sure if this is helpful but this is what I get when I ctrl-c out of the hung script
^C^CException ignored in atexit callback: <bound method Reporter._handle_program_exit of <clearml.backend_interface.metrics.reporter.Reporter object at 0x70fd8b7ff1c0>>
Event reporting sub-process lost, switching to thread based reporting
Traceback (most recent call last):
File "/home/richard/.virtualenvs/temp_clearml/lib/python3.10/site-packages/clearml/backend_interface/metrics/reporter.py", lin...
Yea I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't record to ClearML
Is there some way to kill all connections of a machine to the ClearML server? This does seem to be related to restarting a task / running a new task quickly after a task fails or is aborted
I'm not sure if it still reports logs. But it will continue running on the machine
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
STATUS MESSAGE: N/A
STATUS REASON: Signal None
I found that setting store_uncommitted_code_diff: false instead of true seems to fix the issue
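For anyone else hitting this, the key sits under the sdk.development section of clearml.conf, something like this (the rest of my config is omitted):

sdk {
  development {
    # skip storing the uncommitted git diff with the task
    store_uncommitted_code_diff: false
  }
}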
I will try with clearml==1.16.3rc2 and see if it still has the issue
The console logging still works. Aborting the task showed up in the log but did not work, and the process continued until I killed it.
I didn't do a very scientific comparison but the # of API calls did decrease substantially by turning off auto_connect_streams
It is probably about 100k API calls per day with 1 experiment running, whereas before it was maybe 300k API calls per day. Still seems like a lot when I only run 20-30 epochs in a day
It seems similar to this: None. Is it possible that saving too many model weights causes the metric logging thread to die?
I just used CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
can that be put in the clearml.conf? I didn't see a reference to it in the documentation
I am still having this issue. An update is that the "abort" does not work. Even though the state is correctly tracked in ClearML, when I try to abort the experiment through the UI it says it did, but the experiment remains running on the computer.
Yea, from all the YouTube videos it is just there with no mention of how to get it. But I don't have it
Is this just the console output while training?
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run
I guess I don't understand; I am referring to the clearml configuration file on the agent. The only way I have gotten it to consistently work is to just install the environment beforehand and set that environment variable. Otherwise it seems clearml is not correctly saving the environment to be able to reproduce it. In my case the issue is it installs tensorflow instead of tensorflow[and-cuda], which is what was installed
Then we also connect two dictionaries for configs
task.connect(model_config)
task.connect(DataAugConfig)
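For reference, the relevant part of the training script is roughly this (the real dicts are much larger, these values are just stand-ins):

from clearml import Task

task = Task.init(project_name="my_project", task_name="training_run")  # names here are placeholders

# illustrative stand-ins for the real configs
model_config = {"backbone": "resnet50", "learning_rate": 1e-3}
DataAugConfig = {"horizontal_flip": True, "rotation_range": 15}

# without an explicit name= both dicts get merged under the "General" configuration section
task.connect(model_config)
task.connect(DataAugConfig)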
It was working for me. Anyway, I modified the callback. Attached is the script that has the issue for me: whenever I add random_image_logger to the callbacks, it only logs some of the scalars for 1 epoch, then gets stuck and never recovers. When I remove random_image_logger, the scalars are correctly logged. Again, this is only on 1 computer; on our other computers logging works perfectly fine
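The callback is roughly along these lines (heavily simplified sketch, not the exact attached script; the image source and counts are made up):

import numpy as np
from tensorflow import keras
from clearml import Logger

class RandomImageLogger(keras.callbacks.Callback):
    # reports a handful of debug images to ClearML at the end of every epoch
    def __init__(self, sample_images, max_images=4):
        super().__init__()
        self.sample_images = sample_images
        self.max_images = max_images

    def on_epoch_end(self, epoch, logs=None):
        logger = Logger.current_logger()
        for i, img in enumerate(self.sample_images[: self.max_images]):
            logger.report_image(
                title="debug samples",
                series="sample_%d" % i,
                iteration=epoch,
                image=np.asarray(img),
            )

# used as: model.fit(..., callbacks=[RandomImageLogger(val_images)])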
Thanks! It looks like I can set auto_connect_streams = False in the task init at least to try.
We are using Keras, so it is logging progress bars by default, which I think we could turn off. I just wouldn't expect logging text to require so many API calls. Especially since they charge by API calls, I assumed it would be better managed.
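Something like this is what I am planning to try in the task init (project/task names are placeholders); as far as I can tell it also accepts a dict, so stderr could stay on while the stdout progress bars are dropped:

from clearml import Task

task = Task.init(
    project_name="my_project",
    task_name="training_run",
    auto_connect_streams=False,  # stop capturing stdout/stderr/logging so progress bars don't turn into API calls
)
# or, to keep error output but drop the progress bars:
# task = Task.init(..., auto_connect_streams={"stdout": False, "stderr": True, "logging": False})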