So from our IT guys I now know that the "s3" part of the URL is a subdomain. We use it in all our other libraries, like boto3 and cloudpathlib, and have never had any problems.
This is where the crash happens inside the ClearML Task
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
Can I do it while I have multiple trainings in progress?
Bump, still waiting. It's closing in on a month that we've been unable to deploy. We have a team of 10+ people.
Is it possible to split the large Elasticsearch indices? I know Elasticsearch has something called rollover, but I'm not sure ClearML supports this.
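For context, this is roughly what a rollover call looks like with the elasticsearch Python client; the alias name and conditions below are hypothetical, since I don't know how ClearML's indices are actually set up:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical alias; rollover requires a write alias on the index,
# which ClearML may or may not configure out of the box.
es.indices.rollover(
    alias="events-training_debug_image",
    body={"conditions": {"max_size": "50gb"}},
)
```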
I know these keys work; the URL and everything else is correct, because I use these credentials daily.
OK, slight update: it seems artifacts are now uploading to the bucket. Maybe my file explorer was showing a stale cache or something.
However, reported images are still uploaded to the fileserver instead of S3.
Here is the script I'm using to test things. Thanks.
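Roughly what the script does (a trimmed-down sketch, not the exact file; the project, bucket, and host names are placeholders):

```python
import numpy as np
from clearml import Logger, Task

# Placeholder project/bucket/host -- substitute your own.
task = Task.init(
    project_name="s3-debug",
    task_name="upload-test",
    output_uri="s3://s3.example.com:443/my-bucket/clearml",
)

# Artifact upload -- this part now lands in the bucket.
task.upload_artifact(name="dummy", artifact_object={"hello": "world"})

# Debug image -- this part still lands on the fileserver. Presumably the
# debug-sample destination can be pointed at S3 too, e.g. with
# Logger.current_logger().set_default_upload_destination("s3://s3.example.com:443/my-bucket/clearml")
img = (np.random.rand(64, 64, 3) * 255).astype("uint8")
Logger.current_logger().report_image("debug", "random", iteration=0, image=img)

task.close()
```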

@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Elasticsearch also takes about 15 GB of RAM.
Good morning, I tried the script you provided and I'm getting somewhere.
Sounds similar to our issue? We have self-hosted S3.
I was on version 1.7 and now I'm on the latest, 1.11.
Can't get a screenshot yet (still copying data); I'll add it later.
What worries me is that the config and agent folders are empty. I can reconfigure all agents, no problem.
But where is the info about projects stored?
I get the same result when I copy the /opt/clearml/data folder into /mnt/data/clearml/data.
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set port 443 everywhere in clearml.conf.
Setting the S3 host URLs with :443 in clearml.conf and also in the web UI made it work.
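In case it helps someone else: the explicit :443 has to appear everywhere the host does, including any URIs passed from code. A quick way to verify from Python (placeholder host/bucket):

```python
from clearml import StorageManager

# Placeholder host/bucket; note the explicit :443, matching the host
# entry under sdk.aws.s3.credentials in clearml.conf.
url = StorageManager.upload_file(
    "test.txt",
    "s3://s3.example.com:443/my-bucket/test.txt",
)
print(url)  # prints the uploaded URL on success
```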
I'm now almost at the finish line. The last thing that would be great to fix is archived task deletion.
For some reason I get a missing S3 keys error in the ClearML docker compose logs, and the folders/files are not deleted from the S3 bucket.
You can see how storage_credentials.co...
Do I need clearml.conf on my ClearML server (in the config folder that is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.
OK, is the dataset path stored in Mongo?
I'm unable to find it in Elasticsearch (the debug images were there).
There is a typo in the clearml.conf I sent you, on line 87: it should be "key", not "ey". I'm aware of it.
Maybe someone on your end can try to parse such a config and see if they also hit the same problem.
I solved the problem.
I had to add a TensorBoard logger and pass it to the pytorch_lightning trainer with logger=logger.
Is that normal?
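For anyone who hits the same thing, the fix looked roughly like this (pytorch_lightning 1.x import paths; model/datamodule omitted):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

# Without an explicit TensorBoard logger there was nothing for ClearML's
# TensorBoard auto-logging to hook into, so no scalars/images appeared.
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment")
trainer = pl.Trainer(max_epochs=10, logger=logger)
# trainer.fit(model, datamodule=dm)
```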