For example, we have a complicated YAML file with built-in !include instructions, so we upload all the included files too. This then clogs up the artifacts sidebar, and it would be nice to be able to say "these are all artifacts from this one file, you can collapse it by clicking here".
So basically the ability to have more artifact types in the "Artifacts" tab, other than Other, OutputModels, etc.
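For reference, a rough sketch of what we do today (file names are made up):

```python
# Sketch: upload the main YAML plus every file it !include-s,
# each as its own artifact (which is what clutters the sidebar).
from pathlib import Path
from clearml import Task

task = Task.current_task()
main_cfg = Path("configs/train.yaml")       # hypothetical main config
included = [Path("configs/model.yaml"),     # hypothetical !include targets
            Path("configs/data.yaml")]

task.upload_artifact(name=main_cfg.name, artifact_object=main_cfg)
for f in included:
    # every included file currently shows up as a separate "Other" entry
    task.upload_artifact(name=f.name, artifact_object=f)
```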
Always great to find a bug! I'll make relevant SDK updates then.
I'm also getting the following warning, I guess it's some ClearML dependency?
IPython could not be loaded!
Hm. Is there a simple way to test tasks, one at a time?
The SDK is fine as it is - I'm more looking at the WebUI at this point
TimelyPenguin76 that would have been nice, but I'd like to upload files as artifacts (rather than parameters).
AgitatedDove14 I mean like a grouping in the artifact. If I add e.g. foo/bar to my artifact name, it will be uploaded as foo/bar.
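i.e. something like this (sketch):

```python
# Sketch: today a slash in the artifact name is kept verbatim; the ask
# is for the WebUI to treat it as a folder-like grouping instead.
from clearml import Task

task = Task.current_task()
task.upload_artifact(name="foo/bar", artifact_object={"answer": 42})
# -> appears in the Artifacts tab literally as "foo/bar"
```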
No, it doesn't -- the agent has its own clearml.conf file.
I'm not too familiar with ClearML on docker, but I do remember there are config options to pass some environment variables to docker.
You can then set your environment variables in any way you'd like before the container starts.
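If I remember correctly it's something along these lines in the agent's clearml.conf (from memory, double-check the agent docs):

```
agent {
    # extra arguments passed verbatim to `docker run`,
    # e.g. to inject environment variables into the container
    extra_docker_arguments: ["-e", "MY_VAR=value", "-e", "OTHER_VAR=123"]
}
```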
Perfect, thanks for the answers Valeriano. These small details are missing from the documentation, but I now feel much more confident in setting this up.
If I set the following:

```
"extra_clearml_conf": "sdk.aws.s3.credentials = [\n{\nhost: 'ip:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n},\n{\nhost: 'ip2:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n}\n]"
```

I run into a weird furl error: ValueError: Invalid port '9000''.
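EDIT: I think I see it -- single quotes aren't string delimiters in HOCON, so the trailing ' probably ends up inside the host value (note the doubled quote in '9000''). Rewriting it with escaped double quotes might work (untested):

```
"extra_clearml_conf": "sdk.aws.s3.credentials = [\n  {\n    host: \"ip:9000\"\n    key: \"xxx\"\n    secret: \"xxx\"\n    multipart: false\n    secure: false\n  },\n  {\n    host: \"ip2:9000\"\n    key: \"xxx\"\n    secret: \"xxx\"\n    multipart: false\n    secure: false\n  }\n]"
```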
I'm aware, but it would be much cleaner to define them in the worker's clearml.conf and let ClearML expose them locally to running tasks.
EDIT: Also the above is specifically about serving, which is not the target here 🤔 At least not yet 😄
We do not CostlyFox64, but this is useful for the future 🙂 Thanks!
TimelyPenguin76 I'll have a look, one moment.
We're using 1.1.5 at the moment -- I'll make sure everyone updates to 1.1.6 on Monday.
That solution does not work for us unfortunately -- the .env is an argument from argparse, and because we cannot attach non-git files to a remote task (again issue #395), we have to first download CLI arguments for remote execution and ensure they exist on the remote agent.
That will come at a later stage
The logs are on the bucket, yes.
The default file server is also set to s3://ip:9000/clearml
Or do you mean the contents of the configuration, probably 🤦 ... one moment
Without knowing anything, I'm assuming maybe ClearML patches plt.title and not Axes.set_title?
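To illustrate the difference (sketch):

```python
import matplotlib.pyplot as plt

# Module-level pyplot API -- straightforward to monkey-patch.
plt.figure()
plt.plot([1, 2, 3])
plt.title("via plt.title")          # a patched plt.title would see this

# Object-oriented API -- never goes through plt.title.
fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.set_title("via Axes.set_title")  # a plt.title patch would miss this
```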
Another side effect btw is that some of our log files (we add a file handler to the logger) end up at 0 bytes. This specifically happens with Ray and ClearML and does not reproduce locally.
I thought so too - so I added flush calls just in case, but nothing's changed.
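Roughly this, at the end of the run (sketch; the logger name is made up):

```python
import logging

logger = logging.getLogger("ccmlp")   # hypothetical logger name
for handler in logger.handlers:
    handler.flush()                   # push buffered records to disk

logging.shutdown()                    # flush + close all handlers at exit
```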
This is somewhat weird since it always happens in the above scenario (Ray + ClearML), and always in the last task/job from Ray.
Example configuration -
```
version: 1
disable_existing_loggers: true
formatters:
  simple:
    format: '%(asctime)s %(levelname)-9s %(name)-24s: %(message)s'
filters:
  brackets:
    (): ccutils.logger.BracketFilter
handlers:
  console:
    class: ccmlp.utils.TqdmStreamHandler
    level: INFO
    formatter: simple
    filters: [brackets]
loggers:  # Set logging levels for specific packages
  urllib3:
    level: WARNING
  matplotlib:
    level: WARNING
...
```
I'm guessing that's not on PyPI yet?
If that's the case, wouldn't it apply across the board? This happens in a single task within Ray - the other tasks (I have many in a single run) are fine.
What do you mean? 😄 Using logging.config.dictConfig(...)
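i.e. loading the YAML above and handing it to dictConfig (sketch; the path is made up):

```python
import logging.config
import yaml

# hypothetical path to the logging config shown earlier
with open("configs/logging.yaml") as f:
    logging.config.dictConfig(yaml.safe_load(f))
```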
I believe it is maybe a race condition that's tangential to ClearML now...
Removing the PVC is just setting its state to absent, AFAIK.
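Something like this, assuming the kubernetes.core.k8s Ansible module (sketch; names are made up):

```yaml
# Sketch: a PVC is removed by declaring it absent
- name: Remove the ClearML data PVC
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: PersistentVolumeClaim
    name: clearml-data
    namespace: clearml
```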