I can't get the conf credentials to work.
Specifying it like this gives me:
```
Exception has occurred: ValueError
Could not get access credentials for 'None', check configuration file ~/clearml.conf
```
WebApp: 1.14.1-451 • Server: 1.14.1-451 • API: 2.28
- Is 50 GB of Elasticsearch data normal? Have you seen it elsewhere, or are we doing something wrong? One thing I suspect is that we are logging too frequently (see the sketch after this list).
- Is it possible to clean this up somehow?
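In case it helps, a minimal sketch of throttling scalar reporting so fewer event documents land in Elasticsearch (the project/task names and the every-100-iterations threshold are just illustrative choices):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="throttled-logging")
logger = task.get_logger()

REPORT_EVERY = 100  # illustrative: one report per 100 iterations

for iteration in range(10_000):
    loss = 1.0 / (iteration + 1)  # stand-in for a real training loss
    if iteration % REPORT_EVERY == 0:
        # each report_scalar call becomes an event document in Elasticsearch,
        # so reporting less often directly shrinks the events index
        logger.report_scalar(title="loss", series="train", value=loss, iteration=iteration)
```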
We fixed the issue, thanks; we had to update everything to the latest version.
Yes, but does add_external_files make chunked zips the way add_files does?
I need the zipping and chunking to manage millions of files.
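For context, this is roughly how the zipped/chunked path looks with add_files (a sketch; the names, the output_url, and the 512 MB chunk size are placeholders):

```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my-dataset", dataset_project="my-project")
ds.add_files(path="/data/my_files")  # local files get zipped into archives on upload

# chunk_size (MB) splits the upload into multiple zip archives;
# add_external_files only registers links, so no zipping or chunking happens there
ds.upload(output_url="s3://my-bucket/datasets", chunk_size=512)
ds.finalize()
```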
We use a Ceph Storage Cluster; the interface to it is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the s3 config I posted earlier.
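For anyone following along, a minimal sketch of what the sdk.aws.s3 block can look like for an S3-compatible endpoint such as Ceph (the host, bucket, and keys are placeholders, not the actual config posted earlier):

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "ceph.internal.example:9000"  # endpoint, including port
                    bucket: "my-bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}
```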
I purged all Docker images and it still doesn't seem right.
I see no side panel and it doesn't ask for a login name.
But it seems like the data is gone; I'm not sure how to get it back.
Our datasets are more than 1 TB in size and will keep growing (probably to 4 TB and up), which means we would also need 4 TB of local storage just to upload the dataset back in zipped format. This is not a good solution.
I guess what we could do is download locally in chunks of files?
Download 100 files locally, add them to a ClearML dataset, repeat.
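Something along these lines, as a sketch (the bucket URLs, names, and batch size are illustrative; each batch becomes a child version of the previous one, so only the staging folder ever needs local disk):

```python
import shutil
from pathlib import Path
from clearml import Dataset, StorageManager

BATCH = 100                     # illustrative batch size
staging = Path("/tmp/staging")  # scratch space; only ever holds one batch

# placeholder list: however you enumerate the source files on Ceph/S3
remote_files = [f"s3://old-bucket/data/file_{i}.bin" for i in range(1000)]

parent = None
for step in range(0, len(remote_files), BATCH):
    staging.mkdir(parents=True, exist_ok=True)
    for url in remote_files[step:step + BATCH]:
        # get_local_copy downloads into the local cache; copy into staging
        local = StorageManager.get_local_copy(remote_url=url)
        shutil.copy(local, staging)

    ds = Dataset.create(
        dataset_name=f"migration-step-{step // BATCH}",
        dataset_project="my-project",
        parent_datasets=[parent] if parent else None,
    )
    ds.add_files(path=str(staging))
    ds.upload(chunk_size=512)  # chunked zips, same as any add_files upload
    ds.finalize()
    parent = ds.id

    shutil.rmtree(staging)  # free the local disk before the next batch
```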
I tried it with the port, but I'm still having the same issue.
Tried it with and without secure and multipart.
- Here is what the client-side clearml.conf looks like, together with the script I'm using to create the tasks. Uploads seem to work and are fixed thanks to you guys 🙌
We had a similar problem. ClearML doesn't support data migration (not that I know of).
So you have two ways to fix this:
- Recreate the dataset once it's already in Azure
- Edit each Elasticsearch file entry in the database to point to the new destination (we did this; a sketch follows below)
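Roughly what the Elasticsearch rewrite looked like on our side, as a sketch (the index pattern, the url field, and both URL prefixes are assumptions about your setup; snapshot/back up the indices before running anything like this):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # the clearml Elasticsearch endpoint

# assumption: the affected documents keep the file location in a "url" field
es.update_by_query(
    index="events-*",  # assumption: adjust to the indices holding your entries
    body={
        "query": {"prefix": {"url": "https://old-files.example.com/"}},
        "script": {
            "lang": "painless",
            "source": (
                "ctx._source.url = ctx._source.url.replace("
                "'https://old-files.example.com/', "
                "'https://new-account.blob.core.windows.net/')"
            ),
        },
    },
)
```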
@<1523701070390366208:profile|CostlyOstrich36> Is it still needed, since Eugene thinks there is a bug?
Yes, the credentials seem to work.
Now I'm trying to figure out why I don't see the uploaded files/folders.
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask our IT guy to check the logs for whether I'm uploading any files)
The problem is that the clearml.conf s3 config doesn't support an empty region field; even an empty string crashes it.
But there are still some weird issues; I cannot see the uploaded files in the bucket.
Specifying it like this gets me a different error:
```
Exception has occurred: ValueError
Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.

During handling of the above exception, another exception occurred:
  File "/home/ma...
```
When I look at the LinkEntry object, the link property is correct, with no duplicates. It's the relative_path that's duplicated, and also the key name in _dataset_link_entries.
I'm doing all of this because there isn't (or I'm not aware of) any good way to understand what datasets are on the workers.
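For what it's worth, one workaround sketch: inspect each worker's local dataset cache directly (this assumes the default cache location and the ds_<id> folder naming, and that sdk.storage.cache hasn't been overridden in clearml.conf):

```python
from pathlib import Path

# default ClearML dataset cache (assumption: not overridden in clearml.conf)
cache_dir = Path.home() / ".clearml" / "cache" / "storage_manager" / "datasets"

# each cached dataset version sits in a folder named like ds_<dataset_id>
for entry in sorted(cache_dir.glob("ds_*")):
    size_mb = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file()) / 1e6
    print(f"{entry.name}: {size_mb:.1f} MB")
```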