Some commits related to subprocesses and thread handling 🙂
Another side effect, btw, is that some of our log files (we add a file handler to the logger) end up at 0 bytes. This specifically happens with Ray and ClearML and does not reproduce locally.
I'll try with 1.1.5 first, then 1.1.6rc0
Well, the thing is ClearML also uses dictConfig, and I think you might be overriding its settings...
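For example, a minimal sketch (hypothetical logger name and file, not ClearML's actual setup) of how a later dictConfig call can silence loggers configured before it - which would also explain file handlers that never write anything:
```python
import logging
import logging.config

# Pretend this logger + file handler were set up earlier (e.g. by another library).
early = logging.getLogger("early_configured")  # hypothetical name
early.addHandler(logging.FileHandler("early.log"))  # file is created immediately

logging.config.dictConfig({
    "version": 1,
    # True is the default: every logger that existed before this call is disabled
    # unless it (or an ancestor) is explicitly named in this config.
    "disable_existing_loggers": True,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"level": "DEBUG", "handlers": ["console"]},
})

# This record is dropped, so nothing is ever written: early.log stays at 0 bytes.
early.info("never reaches early.log")
```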
I thought so too - so I added flush calls just in case, but nothing's changed.
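i.e. something along these lines (simplified sketch of the kind of flush I mean, not our exact code; the helper name is just illustrative):
```python
import logging

def flush_all_handlers() -> None:
    """Flush every handler on the root logger and on all named loggers."""
    loggers = [logging.getLogger()] + [
        logging.getLogger(name) for name in logging.root.manager.loggerDict
    ]
    for lg in loggers:
        for handler in lg.handlers:
            handler.flush()

# Called at the end of each Ray task, before the worker exits.
flush_all_handlers()
```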
This is somewhat weird since it always happens in the above scenario (Ray + ClearML), and always in the last task/job from Ray
Or do you mean the contents of the configuration? Probably :face_palm: ... one moment
Hi UnevenDolphin73, which clearml version are you using?
What do you mean? 😄 Using logging.config.dictConfig(...)
We just inherit from logging.Handler
and use that in our logging.config.dictConfig; the weird thing is that it still logs most of the tasks, just not the last one?
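Roughly like this (minimal sketch with a hypothetical handler class; the real one ships records elsewhere):
```python
import logging
import logging.config

class TaskLogHandler(logging.Handler):
    """Hypothetical stand-in for our custom handler."""

    def emit(self, record: logging.LogRecord) -> None:
        try:
            message = self.format(record)
            print(message)  # the real handler writes the record elsewhere
        except Exception:
            self.handleError(record)

logging.config.dictConfig({
    "version": 1,
    "handlers": {
        # "()" tells dictConfig to instantiate the handler via this callable
        "custom": {"()": TaskLogHandler, "level": "INFO"},
    },
    "root": {"level": "DEBUG", "handlers": ["custom"]},
})

logging.getLogger(__name__).info("goes through TaskLogHandler")
```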
Example configuration -
```yaml
version: 1
disable_existing_loggers: true

formatters:
  simple:
    format: '%(asctime)s %(levelname)-9s %(name)-24s: %(message)s'

filters:
  brackets:
    (): ccutils.logger.BracketFilter

handlers:
  console:
    class: ccmlp.utils.TqdmStreamHandler
    level: INFO
    formatter: simple
    filters: [brackets]

loggers:
  # Set logging levels for specific packages
  urllib3:
    level: WARNING
  matplotlib:
    level: WARNING
  botocore:
    level: WARNING
  fsspec:
    level: WARNING
  s3fs:
    level: WARNING
  boto3:
    level: WARNING
  s3transfer:
    level: WARNING
  git:
    level: WARNING
  ray:
    level: WARNING
  PIL:
    level: WARNING

root:
  level: DEBUG
  handlers: [console]
```
Might very well be - do you touch other handlers?
SuccessfulKoala55 could this be related to the monkey patching of the logging platform? We have our own logging handlers that we use in this case.
I believe it may be a race condition that's tangential to clearml now...
If that's the case, wouldn't it apply across the board? This happens in a single task within Ray - the other tasks (I have many in a single run) are fine.