Yes, TensorBoard. It is still logging the TensorBoard scalars and images; it just doesn't log the console output.
The machine currently having the issue is on tensorboard==2.16.2
So I am only seeing values for the first epoch. It seems like it does not track all of them, so maybe something is happening when it tries to log scalars.
I have seen it only log iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch.
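For reference, this is roughly where that call sits (a minimal sketch; the project and task names are placeholders):

from clearml import Task

task = Task.init(project_name='my_project', task_name='my_task')
# reset the iteration offset so reporting starts from 0 instead of
# continuing from a previous run's last iteration
task.set_initial_iteration(0)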
TensorBoard is correct and works. I have never seen an issue in the TensorBoard logs.
I am on 1.16.2
from clearml import Task

# model_config is defined earlier in the script;
# auto_connect_streams=True streams the console output to the task
task = Task.init(project_name=model_config['ClearML']['project_name'],
                 task_name=model_config['ClearML']['task_name'],
                 continue_last_task=False,
                 auto_connect_streams=True)
The console logging still works. The abort showed up in the log but did not actually stop anything; the process continued until I killed it.
STATUS MESSAGE: N/A
STATUS REASON: Signal None
I found that setting store_uncommitted_code_diff: false
instead of true seems to fix the issue
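For anyone else hitting this, the key lives under the sdk.development section of clearml.conf on the affected machine (a sketch; only the relevant keys shown):

sdk {
  development {
    # don't store the uncommitted git diff with the task
    store_uncommitted_code_diff: false
  }
}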
Correct, so I get something like this
ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page:
but that is all
Then we also connect two dictionaries for configs
task.connect(model_config)
task.connect(DataAugConfig)
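Roughly like this (a sketch; the dict contents are made-up placeholders, and the name= argument is optional, only there so the two dicts show up as separate sections in the UI):

# hypothetical config dicts, only to show the shape of what gets connected
model_config = {'ClearML': {'project_name': 'my_project', 'task_name': 'my_task'},
                'learning_rate': 1e-3}
DataAugConfig = {'horizontal_flip': True, 'rotation_range': 15}

# connect both so they show up (and stay editable) in the task's CONFIGURATION tab
task.connect(model_config, name='model_config')
task.connect(DataAugConfig, name='DataAugConfig')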
Yea, from all the YouTube videos it is just there with no mention of how to get it. But I don't have it
Thank you! I think that is all I need to do
It looks like it creates a task_repository folder in the virtual environment folder. There is a way to specify your virtual environment folder, but I haven't found any way to specify the git directory.
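I think the virtual environment folder I mean is the agent.venvs_dir setting in clearml.conf (a sketch, and the path is a placeholder), but I haven't found an equivalent for where the repo itself gets cloned:

agent {
  # where the agent builds the per-task virtual environments
  venvs_dir: /opt/clearml/venvs-builds
}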
Another thing I notice is that aborting the experiment does not work when this is happening. It just continues to run
I didn't do a very scientific comparison, but the number of API calls did decrease substantially after turning off auto_connect_streams.
It is probably about 100k API calls per day with 1 experiment running, whereas before it was maybe 300k API calls per day. That still seems like a lot when I only run 20-30 epochs in a day.
Thanks! It looks like I can set auto_connect_streams = False in the task init, at least to try.
We are using Keras, so it is logging progress bars by default, which I think we could turn off. I just wouldn't expect logging text to require so many API calls. Especially since they charge by API calls, I assumed it would be better managed.
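Something like this is what I'm planning to try, a sketch only (tiny stand-in model and data just so it runs end to end; I haven't measured the actual reduction yet):

from clearml import Task
import numpy as np
from tensorflow import keras

# auto_connect_streams=False: don't stream console output to ClearML
task = Task.init(project_name='my_project', task_name='api-call-test',
                 auto_connect_streams=False)

# tiny stand-in model and data, placeholders for our real training setup
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(64, 4), np.random.rand(64, 1)

# verbose=2 prints one line per epoch instead of a live progress bar,
# so far less console text is generated in the first place
model.fit(x, y, epochs=5, verbose=2)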
Will do! It probably won't be until next week. I don't plan on stopping this run to try it but will definitely follow up with my results.
Yea I think if we self-hosted I wouldn't have noticed it at all
It's possible. Is there a way to just slow down or turn off the log streaming to see how it affects the API calls?
When I try to abort an experiment, I get this in the log:
clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###
but it does not stop anything; it just continues to run.
I guess I don't understand; I am referring to the ClearML configuration file on the agent. The only way I have gotten it to consistently work is to install the environment beforehand and set that environment variable. Otherwise it seems ClearML is not correctly saving the environment so that it can be reproduced. In my case the issue is that it installs tensorflow instead of tensorflow[and-cuda], which is what was originally installed.
I just used CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
Can that be put in the clearml.conf? I didn't see a reference to it in the documentation.
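For reference, this is how I'm setting it right now on the agent machine (a sketch; the venv path and queue name are placeholders):

# activate the environment I built by hand, then tell the agent to reuse it
source /opt/envs/tf_cuda/bin/activate
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
clearml-agent daemon --queue default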
Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:
ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page:
The console logs continue to come in, but no scalars or debug images show up.
It is not always reproducible. It seems like something we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments.
I just created a new virtual environment and the problem persists. There are only two dependencies: clearml and tensorflow. @<1523701070390366208:profile|CostlyOstrich36> what logs are you referring to?
Sometimes I get no scalars, but the console logging always seems to be working.
It was working for me. Anyway, I modified the callback. Attached is the script that has the issue for me: whenever I add random_image_logger to the callbacks, it only logs some of the scalars for 1 epoch, then gets stuck and never recovers. When I remove random_image_logger, the scalars are correctly logged. Again, this is only on 1 computer; on our other computers the logging works perfectly fine.
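For context, the callback is along these lines, a simplified sketch from memory rather than the exact attached script (class name, shapes, and titles are placeholders):

import numpy as np
from tensorflow import keras
from clearml import Logger

class RandomImageLogger(keras.callbacks.Callback):
    # reports one debug image per epoch through the ClearML logger
    def on_epoch_end(self, epoch, logs=None):
        # placeholder image; the real callback grabs a random sample from our data
        img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
        Logger.current_logger().report_image(
            title='debug_samples', series='random', iteration=epoch, image=img)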
Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the ClearML task page's console section.
The same training works sometimes. But I'm not sure how to troubleshoot when it stops logging the metrics
Is this just the console output while training?
Not sure why that is related to saving images
Yea, I am fine not having the console logging. My issue is that the scalars and debug images occasionally don't get recorded to ClearML.