Also, I appreciate the time you're taking to answer AgitatedDove14 and CostlyOstrich36, I know Fridays are not working days in Israel, so thank you 🙂
Ah, uhhhh whatever is in the helm/glue charts. I think it's the allegroai/clearml-agent-k8s-base, but since I hadn't gotten a chance to try it out, it's hard to say with certainty which would be the best for us 🙂
Also something we are very much interested in (including the logger-based scatter plots etc)
Yes 🙂 I want ClearML to load and parse the config before that. But now I'm not even sure those settings in the config are exposed as environment variables?
So now we need to pass Task.init(deferred_init=0), because the default Task.init(deferred_init=False) is wrong.
That could be a solution for the regex search; my comment on the pop-up (in the previous reply) was a bit more generic - just that it should potentially include some information on what failed while fetching experiments 🙂
I guess the big question is how can I transfer local environment variables to a new Task
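To make the question concrete, here is a minimal sketch of what I mean, assuming we just gather a whitelist of local environment variables into a plain dict that could then be attached to the new task (collect_env is a hypothetical helper, not a ClearML API):

```python
import os

# Hypothetical helper (not a ClearML API): gather selected local environment
# variables into a dict that could be handed over to a new task.
def collect_env(names):
    return {name: os.environ[name] for name in names if name in os.environ}

os.environ["MY_SETTING"] = "value"
print(collect_env(["MY_SETTING", "NOT_SET"]))  # {'MY_SETTING': 'value'}
```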
Then the username and password would be visible in the autoscaler task.
But it should work out of the box, since it works that way regardless of ClearML. The username and personal access token are used as-is and propagate down to submodules, since those are simply another git repository.
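To illustrate the mechanism I mean (a sketch; with_token is a hypothetical name, not anything from git or ClearML): when the user and token are embedded in the HTTPS clone URL, git reuses those credentials for same-host submodules, since they are just nested repositories.

```python
# Hypothetical illustration: embedding user/token credentials into an HTTPS
# git URL, the form that propagates down to same-host submodule clones.
def with_token(url: str, user: str, token: str) -> str:
    scheme, rest = url.split("://", 1)
    return f"{scheme}://{user}:{token}@{rest}"

print(with_token("https://example.com/org/repo.git", "bot", "TOKEN"))
# https://bot:TOKEN@example.com/org/repo.git
```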
I've run further checks on a different machine, and it works there as well.
None, they're unusable for us.
Something like this, SuccessfulKoala55?
1. Open a bash session on the docker (docker exec -it <docker id> /bin/bash)
2. Open a mongo shell (mongo)
3. Switch to the backend db (use backend)
4. Get the relevant project IDs (db.project.find({"name": "ClearML Examples"}) and db.project.find({"name": "ClearML - Nvidia Framework Examples/Clara"}))
5. Remove the relevant tasks (db.task.remove({"project": "<project_id>"}))
6. Remove the project IDs (db.project.remove({"name": ...}))
Hi SuccessfulKoala55 !
Could you elaborate on how best to delete these from the database?
SweetBadger76 TimelyPenguin76
We're finally tackling this (since it has kept us back at 1.3.2 even though 1.6.2 is out...), and noticed that now the bucket name is also part of the folder?
So following up from David's latest example:
StorageManager.download_folder(remote_url='s3://****-bucket/david/', local_folder='./')
actually creates a new folder ./****-bucket/david/ and puts its contents there.
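To make the observed behavior concrete, here is a pure-Python illustration (not ClearML's actual code, and using a placeholder bucket name rather than ours) of how that local path arises when everything after the scheme, bucket included, is appended to local_folder:

```python
import os

# Hypothetical illustration (not ClearML's implementation): the local target
# path if the bucket name is kept as part of the remote folder structure.
def local_target(remote_url: str, local_folder: str) -> str:
    relative = remote_url.split("://", 1)[1].strip("/")
    return os.path.join(local_folder, *relative.split("/"))

print(local_target("s3://my-bucket/david/", "./"))  # ./my-bucket/david
```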
EDIT: This is with us using internal MinIO, so I believe ClearML parses that end...
You don't even need to set CLEARML_WORKER_ID; one will automatically be assigned based on the machine's name.
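A sketch of that fallback behavior, assuming the default is derived from the hostname (worker_id here is a hypothetical helper, not the actual clearml-agent code):

```python
import os
import socket

# Hypothetical sketch (not the actual clearml-agent code): use
# CLEARML_WORKER_ID when set, otherwise fall back to the machine's hostname.
def worker_id() -> str:
    return os.environ.get("CLEARML_WORKER_ID") or socket.gethostname()

os.environ.pop("CLEARML_WORKER_ID", None)
print(worker_id())  # the machine's hostname

os.environ["CLEARML_WORKER_ID"] = "my-worker:0"
print(worker_id())  # my-worker:0
```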
AgitatedDove14 The keys are there, and there is no specifically defined user in .gitmodules:
[submodule "xxx"]
    path = xxx
    url =
I believe this has to do with how ClearML sets up the git credentials perhaps?
You mean the host is considered the bucket, as I wrote in my earlier message as the root cause?
I'm using some old agent I fear, since our infra person decided to use chart 3.3.0 🙂
I'll try with the env var too. Do you personally recommend docker over the simple AMI + virtual environment?
A more complete log does not add much information:
Cloning into '/root/.clearml/venvs-builds/3.10/task_repository/xxx/xxx'...
fatal: could not read Username for '': terminal prompts disabled
fatal: clone of '' into submodule path '/root/.clearml/venvs-builds/3.10/task_repository/...
One more thing that may be helpful SweetBadger76, I've gone ahead and looked into clearml.storage.helper, and found that at least if I specify the bucket name directly in the aws.s3.credentials configuration for MinIO, then:
In [4]: StorageHelper._s3_configurations.get_config_by_uri('')
Out[4]: S3BucketConfig(bucket='clearml', host='some_ip:9000', key='xxx', secret='xxx', token='', multipart=False, acl='', secure=False, region='', verify=True, use_credentials_chain=False)...
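For context, a toy illustration (not ClearML's real resolver; BucketConfig and config_for_uri are made-up names) of how a per-bucket config can be matched by URI prefix once the bucket name is listed explicitly alongside the MinIO host:

```python
from dataclasses import dataclass

# Toy illustration (not ClearML's actual resolver): pick the bucket config
# whose host/bucket prefix matches the URI, preferring the longest match.
@dataclass
class BucketConfig:
    host: str
    bucket: str

def config_for_uri(configs, uri):
    path = uri.split("://", 1)[1]
    matches = [c for c in configs if path.startswith(f"{c.host}/{c.bucket}")]
    return max(matches, key=lambda c: len(f"{c.host}/{c.bucket}"), default=None)

cfg = config_for_uri(
    [BucketConfig(host="some_ip:9000", bucket="clearml")],
    "s3://some_ip:9000/clearml/artifacts/file.parquet",
)
print(cfg.bucket)  # clearml
```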
I will! (once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
Ah okay 🙂 Was confused by what you quoted haha 🙂
Honestly I wouldn't mind building the image myself, but the glue-k8s setup is missing some documentation so I'm not sure how to proceed
I see! The Hyper Datasets don't really fit our use case - the feature seems focused on CNNs and image-based data, and lacks support for database-oriented tabular data.
So for now we mainly work with parquet and CSV files, and I was hoping there'd be an easy way to view those... I'll make a workaround with a "Datasets" project I suppose!
Thanks David! I appreciate that, it would be very nice to have a consistent pattern in this!
There's a specific fig[1].set_title(title) call.
The results from searching in the "Add Experiment" view (can't resize column widths -> can't see project name ...)