If that's the case, wouldn't it apply across the board? This happens in a single task within Ray - the other tasks (I have many in a single run) are fine
Another side effect btw is that some of our log files (we add a file handler to the logger) end up at 0 bytes. This specifically happens with Ray and ClearML and does not reproduce locally
We just inherit from logging.Handler
and use that in our logging.config.dictConfig
The weird thing is that it still logs most of the tasks, just not the last one?
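For concreteness, the handler is roughly this shape (a minimal sketch; the real class lives in our internal ccmlp.utils module, and the tqdm-based emit here is an assumption):
```python
import logging
import logging.config

from tqdm import tqdm


class TqdmStreamHandler(logging.Handler):
    """Sketch: write log records via tqdm.write so progress bars are not clobbered."""

    def emit(self, record):
        try:
            tqdm.write(self.format(record))
        except Exception:
            self.handleError(record)


# dictConfig can instantiate a custom handler class via the "()" key
logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "console": {"()": TqdmStreamHandler, "level": "INFO"},
    },
    "root": {"level": "DEBUG", "handlers": ["console"]},
})
```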
Some commits related to subprocesses and thread handling 🙂
What do you mean? 😄 Using logging.config.dictConfig(...)
Hi UnevenDolphin73, which ClearML version are you using?
I thought so too - so I added flush calls just in case, but nothing's changed.
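The flush I added is basically just this (minimal sketch):
```python
import logging

# Explicitly flush every handler attached to the root logger before the task returns
for handler in logging.getLogger().handlers:
    handler.flush()

# logging.shutdown() would also flush and close all registered handlers
```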
This is somewhat weird since it always happens in the above scenario (Ray + ClearML), and always in the last task/job from Ray
Well, the thing is ClearML also uses dictConfig, and I think you might be overriding its settings...
I believe it might be a race condition that's tangential to ClearML now...
Example configuration -
```yaml
version: 1
disable_existing_loggers: true

formatters:
  simple:
    format: '%(asctime)s %(levelname)-9s %(name)-24s: %(message)s'

filters:
  brackets:
    (): ccutils.logger.BracketFilter

handlers:
  console:
    class: ccmlp.utils.TqdmStreamHandler
    level: INFO
    formatter: simple
    filters: [brackets]

loggers:
  # Set logging levels for specific packages
  urllib3:
    level: WARNING
  matplotlib:
    level: WARNING
  botocore:
    level: WARNING
  fsspec:
    level: WARNING
  s3fs:
    level: WARNING
  boto3:
    level: WARNING
  s3transfer:
    level: WARNING
  git:
    level: WARNING
  ray:
    level: WARNING
  PIL:
    level: WARNING

root:
  level: DEBUG
  handlers: [console]
```
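We then apply it with something along these lines (sketch; the YAML path and per-task log file name are placeholders):
```python
import logging
import logging.config
from pathlib import Path

import yaml

# Load the YAML config above and hand it to dictConfig
config = yaml.safe_load(Path("logging.yaml").read_text())
logging.config.dictConfig(config)

# The per-task file handler mentioned earlier; these are the files
# that sometimes end up at 0 bytes under Ray + ClearML
file_handler = logging.FileHandler("task.log")
file_handler.setLevel(logging.DEBUG)
logging.getLogger().addHandler(file_handler)
```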
I'll try with 1.1.5 first, then 1.1.6rc0
Or do you mean the contents of the configuration, probably 🤦 ... one moment
Might very well be - do you touch other handlers?
SuccessfulKoala55 could this be related to the platform's monkey patching of the logging module? We have our own logging handlers that we use in this case