
I get the same when I copy the /opt/clearml/data folder into /mnt/data/clearml/data.
Here is another bit of odd behavior as well:
Well, I connected to MongoDB manually and it is basically empty, loaded with just the examples.
I guess I messed something up when moving the files.
When I look at the LinkEntry object, the link property is correct, with no duplicates. It's the relative_path that's duplicated, and also the key name in _dataset_link_entries.
I was on version 1.7 and now I'm on the latest, 1.11.
Can't get a screenshot yet (still copying data), will add it later.
What worries me is that the config and agent folders are empty. I can reconfigure all the agents, no problem.
But where is the info about projects stored?
Is the fileserver folder needed for a successful backup?
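For what it's worth, a minimal backup sketch, assuming the default /opt/clearml layout and docker-compose setup (paths and filenames are placeholders). Stopping the server first matters so the Elasticsearch and MongoDB files are not written to mid-archive. The fileserver folder holds uploaded artifacts and debug images, so include it if you want those preserved:

```
# Stop the server so the databases are consistent on disk
docker-compose -f /opt/clearml/docker-compose.yml down

# data/ contains mongo (projects/tasks), elastic_7 (events/scalars),
# redis, and fileserver (uploaded artifacts and debug samples)
sudo tar -czf clearml-backup-$(date +%F).tar.gz -C /opt/clearml data config

# Bring the server back up
docker-compose -f /opt/clearml/docker-compose.yml up -d
```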
We had a similar problem. ClearML doesn't support data migration (not that I know of, anyway).
So you have two ways to fix this:
- Recreate the dataset once it's already in Azure
- Edit each Elasticsearch database entry to point to the new destination (we did this)
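For the Elasticsearch route, here is a sketch of the idea: rewrite the stored URL prefixes with a `_update_by_query` request. The index pattern, field name (`url`), and both URL prefixes below are placeholders/assumptions — check what your events actually look like before running anything. The request body built here can be POSTed to `/<index-pattern>/_update_by_query` on the Elasticsearch HTTP API:

```python
import json

OLD_PREFIX = "https://files.old-server.example"  # placeholder: old fileserver URL
NEW_PREFIX = "s3://my-bucket"                    # placeholder: new destination

def rewrite_url(url: str, old: str = OLD_PREFIX, new: str = NEW_PREFIX) -> str:
    """Rewrite a single stored URL to point at the new storage destination."""
    return new + url[len(old):] if url.startswith(old) else url

# Body for POST /events-*/_update_by_query -- rewrites the 'url' field in place
# (field name is an assumption; inspect a sample document first).
update_body = {
    "script": {
        "lang": "painless",
        "source": (
            "if (ctx._source.url != null) "
            "{ ctx._source.url = ctx._source.url.replace(params.old, params.new) }"
        ),
        "params": {"old": OLD_PREFIX, "new": NEW_PREFIX},
    },
    "query": {"prefix": {"url": OLD_PREFIX}},
}
print(json.dumps(update_body, indent=2))
```

Take a snapshot of the index before running the update; `_update_by_query` modifies documents in place.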
OK, slight update: it seems like artifacts are uploading to the bucket now. Maybe my file explorer was using an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
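In case it helps: debug samples default to the fileserver unless you point the SDK elsewhere. A sketch, with the bucket path below as a placeholder, is to set the default output URI in clearml.conf (the same thing can be passed as `output_uri` to `Task.init`):

```
sdk {
    development {
        # upload artifacts/models to S3 instead of the fileserver
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```

For reported debug images specifically, calling `Logger.current_logger().set_default_upload_destination("s3://my-bucket/clearml")` at the start of the script should redirect them as well.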
Here is the script I'm using to test things. Thanks!
@<1523701070390366208:profile|CostlyOstrich36> Is it still needed, since Eugene thinks there is a bug?
I get the sidebars and the login page on my local PC,
but the data isn't loaded.
I tried not editing anything in docker-compose and just pasting my data in there; that didn't help.
@<1523701070390366208:profile|CostlyOstrich36> Hello John, we are still unable to use ClearML with our self-hosted S3 Ceph instances; is there any update on the hotfix for 1.14?
I know these keys work; the URL and everything else works too, because I use these credentials daily.
No, I specify where to upload.
I see the data is being uploaded to the S3 bucket; it's just that the log messages are really confusing.
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.
You can check out the boto3 Python client (this is what we use to download/upload all our S3 stuff), though the MinIO client probably already uses it under the hood.
We also use the AWS CLI for some downloads; it is way faster than Python.
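As a sketch of the CLI route (bucket, prefix, destination, and endpoint below are placeholders): building the `aws s3 sync` invocation programmatically keeps it scriptable, and `--endpoint-url` is what points the CLI at a self-hosted S3 such as Ceph RGW or MinIO:

```python
from typing import Optional

def build_sync_cmd(bucket: str, prefix: str, dest: str,
                   endpoint: Optional[str] = None) -> list[str]:
    """Build an `aws s3 sync` command for downloading a bucket prefix."""
    cmd = ["aws", "s3", "sync", f"s3://{bucket}/{prefix}", dest]
    if endpoint:  # required for self-hosted S3 (Ceph RGW, MinIO, ...)
        cmd += ["--endpoint-url", endpoint]
    return cmd

cmd = build_sync_cmd("my-bucket", "datasets/v1", "./data",
                     endpoint="https://s3.my-ceph.example")
print(" ".join(cmd))
# run it with: subprocess.run(cmd, check=True)
```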
Regarding PDFs, yes, you have no choice but to preprocess them.
@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server.
Setting my config like this breaks task.init.
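For reference, a sketch of the clearml.conf shape for a self-hosted, non-AWS S3 endpoint (host, bucket, and keys below are placeholders). Note that `host` takes `host:port` with no `http(s)://` scheme, and `secure`/`verify` control TLS:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # self-hosted endpoint: host:port, no http(s):// prefix
                    host: "s3.my-ceph.example:443"
                    bucket: "my-bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: true   # use https
                    verify: true   # verify the TLS certificate
                }
            ]
        }
    }
}
```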
Is there any way to see if I even have the data in MongoDB?
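One way to check, assuming the default docker-compose container name and the `backend` database the server uses (use `mongo` instead of `mongosh` on older server images): open a shell into the mongo container and count the documents.

```
# container name and database name may differ on your deployment
docker exec -it clearml-mongo mongosh --quiet --eval '
  var b = db.getSiblingDB("backend");
  print("projects:", b.project.countDocuments({}));
  print("tasks:",    b.task.countDocuments({}));
'
```

If those counts are zero, the copy really did not bring the project data across.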