We had a similar problem. ClearML doesn't support data migration (not that I know of).
So you have two ways to fix this:
- Recreate the dataset once it's already in Azure
- Edit each Elasticsearch database file entry to point to the new destination (we did this; see the sketch below)
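Roughly, the kind of edit we did looks like this. A sketch only: the index pattern, field name, and URLs are placeholders from our setup, so verify against your own documents and back up the index first.
```python
# Hypothetical sketch: rewrite stored file URLs after moving the data.
# Index pattern, field name, and URL prefixes are assumptions/placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.update_by_query(
    index="events-training_debug_image-*",  # assumed index pattern
    body={
        # Only touch documents still pointing at the old storage
        "query": {"prefix": {"url": "https://old-fileserver:8081"}},
        "script": {
            "lang": "painless",
            "source": "ctx._source.url = ctx._source.url.replace(params.old, params.new)",
            "params": {
                "old": "https://old-fileserver:8081",
                "new": "https://new-storage.azure.example",
            },
        },
    },
)
```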
So from our IT guys I now know that the "s3" part of the URL is a subdomain; we use it in all the other libraries like boto3 and cloudpathlib and have never had any problems.
This is where the crash happens inside the clearml Task
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
Can I do it while I have multiple ongoing trainings?
Bump, still waiting; it's closing in on a month that we've been unable to deploy. We have a team of 10+ people.
Is it possible to split the large Elasticsearch indexes? I know Elasticsearch has something called rollover, but I'm not sure that ClearML supports it.
I know these keys work; the URL and everything else work because I use these credentials daily.
OK, slight update: it seems like artifacts are now uploading to the bucket. Maybe my file explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
Here is the script I'm using to test things. Thanks
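Roughly, the script does this (a sketch with placeholder names, not the exact file):
```python
# Sketch of the test: upload an artifact and report a debug image,
# then check where each one lands. Project/bucket names are placeholders.
import numpy as np
import clearml

task = clearml.Task.init(
    project_name="project",
    task_name="s3-upload-test",
    output_uri="s3://our-host.com/clearml-bucket",  # placeholder bucket
)

# Artifact: this is the part that does show up in the S3 bucket
task.upload_artifact("test-array", artifact_object=np.arange(10))

# Debug image: this is the part that ends up on the fileserver instead
task.get_logger().report_image(
    "debug", "random", iteration=0,
    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
)

task.close()
```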

It looks like I'm moving forward.
Setting the URL in clearml.conf without "s3", as suggested, works (but I don't add the port there; not sure if that breaks something, we don't have a port):
host: "our-host.com"
Then in test_task.py:
```python
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
```
I think the connection is created. What I'm getting now is a bucket error; I suppose I have to specify it, so...
@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Elasticsearch also takes something like 15 GB of RAM.
Good morning, I tried the script you provided and I'm getting somewhere.
Sounds similar to our issue? We have self-hosted S3.
None
I was on version 1.7 and now I'm on the latest, 1.11.
Can't get a screenshot yet (copying data), will add it later.
What worries me is that the config and agent folders are empty. I can reconfigure all the agents, no problem.
But where is the info about projects stored?
I get the same when I copy the /opt/clearml/data folder into /mnt/data/clearml/data.
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set 443 everywhere in clearml.conf.
Setting the S3 host URLs with 443 in clearml.conf and also in the web UI made it work; roughly like the sketch below.
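The relevant clearml.conf section now looks something like this (host, bucket, and keys are placeholders for our setup):
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # port 443 set explicitly on the host, same as in the web UI
                    host: "our-host.com:443"
                    bucket: "clearml-bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}
```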
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folders/files are not deleted in the S3 bucket.
You can see how storage_credentials.co...
Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.
Yes, the credentials seem to work.
I'm trying to figure out now why I don't see the uploaded files/folders:
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask the IT guy to check the logs for whether I'm uploading any files); see the sketch below for how I'm testing the upload path directly
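A sketch of the direct test, going through ClearML's storage layer and bypassing Task reporting entirely (the remote URL is a placeholder for our self-hosted bucket):
```python
# Sketch: exercise the S3 credentials directly, without a Task.
# Assumes test.txt exists locally; the remote URL is a placeholder.
from clearml import StorageManager

remote_url = StorageManager.upload_file(
    local_file="test.txt",
    remote_url="s3://our-host.com:443/clearml-bucket/test.txt",
)
print(remote_url)  # where ClearML reports the file ended up
```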

OK, is the dataset path stored in Mongo?
I'm unable to find it in Elasticsearch (the debug images were there).
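This is roughly how I'm poking around MongoDB for it; the "backend" database, "task" collection, and field path are assumptions from our deployment, so inspect your own instance before editing anything:
```python
# Sketch: look for dataset/artifact URIs in the server's MongoDB.
# Database/collection names and the field path are assumptions;
# this is read-only, nothing is modified.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
tasks = client["backend"]["task"]

# Dataset tasks keep their file references under execution.artifacts
doc = tasks.find_one({"name": "task"}, {"execution.artifacts": 1})
print(doc)
```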
There is a typo in the clearml.conf I sent you, on line 87: it should be "key", not "ey". I'm aware of it.



