Sounds like incorrect parsing on ClearML's side then, doesn't it? At least, it doesn't fully support MinIO then.
I don't imagine AWS users get a new folder named aws-key-region-xyz-bucket-hostname when they download_folder(...) from an AWS S3 bucket, or do they? 🤔
I... did not, I'm ashamed to admit. The documentation mentions only boolean values.
So where should I install the latest clearml version? On the client that's running a task, or on the worker machine?
Example configuration:
```yaml
version: 1
disable_existing_loggers: true
formatters:
  simple:
    format: '%(asctime)s %(levelname)-9s %(name)-24s: %(message)s'
filters:
  brackets:
    (): ccutils.logger.BracketFilter
handlers:
  console:
    class: ccmlp.utils.TqdmStreamHandler
    level: INFO
    formatter: simple
    filters: [brackets]
loggers:  # Set logging levels for specific packages
  urllib3:
    level: WARNING
  matplotlib:
    level: WARNING
...
```
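A config like that can be applied with the stdlib's `logging.config.dictConfig`. Here's a minimal self-contained sketch; note the project-specific classes (`ccutils.logger.BracketFilter`, `ccmlp.utils.TqdmStreamHandler`) are swapped for stdlib equivalents so it runs anywhere:

```python
import logging
import logging.config

# Dict equivalent of the YAML config above, with stdlib stand-ins for
# the custom filter/handler classes.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": True,
    "formatters": {
        "simple": {
            "format": "%(asctime)s %(levelname)-9s %(name)-24s: %(message)s"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",  # stand-in for TqdmStreamHandler
            "level": "INFO",
            "formatter": "simple",
        }
    },
    # Quieten chatty third-party packages, as in the example config
    "loggers": {
        "urllib3": {"level": "WARNING"},
        "matplotlib": {"level": "WARNING"},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger(__name__).info("logging configured")
```

In practice you'd keep the YAML file and feed `yaml.safe_load(...)` straight into `dictConfig`.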
Eek. Is there a way to merge a backup from elastic to current running server?
The screenshot is small since the data is private anyway, but it's enough to see:
"Metric: untitled 00" "plot image" as the image title The attached histogram has a title ("histogram of ...")
That gives us the benefit of creating "local datasets" (confined to the scope of the project, do not appear in Datasets tabs, but appear as normal tasks within the project)
That's probably in the newer ClearML server pages then, I'll have to wait still 😅
I just ran into this too recently. Are you passing these also in the extra_clearml_conf for the autoscaler?
FWIW It’s also listed in other places @<1523704157695905792:profile|VivaciousBadger56> , e.g. None says:
In order to make sure we also automatically upload the model snapshot (instead of saving its local path), we need to pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket…
Not sure I understand your comment - why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
Yes exactly, but I guess I could've googled for that 😅
Copy the uncommitted changes captured by ClearML using the UI, write them to `changes.patch`, and run `git apply changes.patch` 👍
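For the record, that workflow looks roughly like this. This is a sketch in a throwaway repo so it can run anywhere; in practice you'd paste the "uncommitted changes" diff from the ClearML UI into `changes.patch` inside your own checkout at the same base commit:

```shell
# Throwaway repo standing in for your real checkout
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "hello" > file.txt
git add file.txt
git commit -qm "baseline commit"

# Simulate the uncommitted-changes diff that ClearML captured
echo "hello world" > file.txt
git diff > changes.patch
git checkout -- file.txt          # back to a clean tree

git apply --check changes.patch   # dry-run: verify the patch applies
git apply changes.patch           # actually apply it
grep "hello world" file.txt
```

The `--check` dry-run is worth doing first, since the patch only applies cleanly if your tree is at the same commit ClearML recorded.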
I'm trying, let's see; our infra person is away on holidays :X Thanks! Uh, which configuration exactly would you like to see? We're running using the Helm charts on K8s, so I don't think I have direct access to the agent configuration or a way to update it separately?
From the log you shared, the task is picked up by the `worker_d1bd92a3b039400cbafc60a7a5b1e52b_4e831c4cbaf64e02925b918e9a3a1cf6_<hostname>:gpu0,1` worker
I can try and target the default one if it helps..?
Ah I see, if the pipeline controller begins in a Task it does not add the tags to it…
PricklyRaven28 That would be my fallback, but it would make development much slower (having to rebuild containers with every small change)
Any updates @<1523701087100473344:profile|SuccessfulKoala55> ? 🙂
Hm, I'm not sure I follow 🤔 How does the API server config relate to the file server?
And this is of course strictly with the update to 1.6.3 (or newer) that should support API 2.20
From our IT dept:
Not really, when you launch the instance, the launch has to already be in the right VPC/Subnet. Configuration tools are irrelevant here.
I believe it may be a race condition that's only tangentially related to ClearML now...
No worries @<1537605940121964544:profile|EnthusiasticShrimp49> ! I made some headway by using `Task.create`, writing a temporary Python script, and using `task.update` in a similar way to how pipeline steps are created.
I'll try to create a minimal example to reproduce the issue, though I may have strayed from your original suggestion because I need to be able to use classes and not just functions.
Uhhh, but `pyproject.toml` does not necessarily entail Poetry... it's a newer Python packaging standard.
Now, the original pyhocon does support include statements as you mentioned - https://github.com/chimpler/pyhocon