I know these keys work; the URL and everything else works because I use these credentials daily
ok, slight update. It seems like artifacts are uploading to the bucket now. Maybe my file explorer was using an old cache or something.
However, reported images are uploaded to the fileserver instead of S3
here is the script I'm using to test things. Thanks
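Roughly what the test script does (a sketch, not the exact file; bucket and project names are placeholders):
```python
import numpy as np
from clearml import Task

task = Task.init(
    project_name="s3_tests",
    task_name="upload_test",
    output_uri="s3://my-bucket/clearml",  # artifacts and models should go to S3
)

logger = task.get_logger()
# without this, the reported (debug) images seem to end up on the fileserver
logger.set_default_upload_destination("s3://my-bucket/clearml")

task.upload_artifact("some_artifact", artifact_object={"a": 1})
logger.report_image(
    "debug", "random", iteration=0,
    image=np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8),
)
task.close()
```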

@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
good morning, I tried the script you provided and I'm getting somewhere
Sounds similar to our issue? We have self-hosted S3
None
I was on version 1.7 and now I'm on the latest, 1.11
Can't get a screenshot yet (copying data), will add it later.
What worries me is that the config and agent folders are empty. I can reconfigure all agents, no problem.
But where is the info about projects stored?
I get the same when I copy the /opt/clearml/data folder into /mnt/data/clearml/data
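For context, this is the kind of volume remap I mean in docker-compose.yml (a sketch based on the stock mongo service; the other services would need the same path change):
```yaml
  mongo:
    volumes:
      - /mnt/data/clearml/data/mongo_4/db:/data/db
      - /mnt/data/clearml/data/mongo_4/configdb:/data/configdb
```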
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set port 443 everywhere in clearml.conf
Setting the S3 host URLs with port 443 in clearml.conf and also in the web UI made it work
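For anyone hitting the same thing, this is roughly what the working section of my clearml.conf looks like now (host, keys and bucket are placeholders):
```
sdk {
  aws {
    s3 {
      credentials: [
        {
          host: "s3.my-company.example:443"  # the explicit :443 is what made the difference
          bucket: "my-bucket"
          key: "ACCESS_KEY"
          secret: "SECRET_KEY"
          secure: true
          multipart: false
        }
      ]
    }
  }
}
```
In the web UI the output destination is set the same way, e.g. s3://s3.my-company.example:443/my-bucket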
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker-compose logs, and the folders/files are not deleted from the S3 bucket.
You can see how storage_credentials.co...
Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO
ok, is the dataset path stored in Mongo?
I'm unable to find it in Elasticsearch (the debug images were there)
there is a typo in the clearml.conf I sent you, on line 87: it should be "key", not "ey". I'm aware of it
maybe someone on your end can try to parse such a config and see if they also have the same problem
I solved the problem.
I had to add a TensorBoard logger and pass it to the PyTorch Lightning trainer with logger=logger
Is that normal?
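Roughly what fixed it (a sketch; the model and paths are placeholders):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

tb_logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment")
trainer = Trainer(max_epochs=10, logger=tb_logger)
# trainer.fit(model, datamodule=dm)
```
ClearML then picks up the TensorBoard scalars automatically.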
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this
I need the zipping and chunking to manage millions of files
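What I mean by zipping/chunking (a sketch; names and sizes are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_project="my_project", dataset_name="many_small_files")
ds.add_files(path="data/")   # millions of small files
ds.upload(chunk_size=512)    # zipped by default, split into ~512 MB chunks
ds.finalize()
```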
@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills them back in
ok then, I have a solution (rough sketch below), but it still produces duplicate names
- new_dataset._dataset_link_entries = {} # cleaning out all raw/a.png files
- Resize a.png and save it in another location, named a_resized.png
- Add back the other files I need (excluding raw/a.png); I add them to new_dataset._dataset_link_entries
- Use add_external_files to include it in the dataset. I'm also using dataset_path=[a list of relative paths]
What I would expect:
100 Files removed (all a.png)
100 Files added (all a_resized.png)
...
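A rough sketch of those steps (not my exact script; the project, bucket and resize step are placeholders, and I'm glossing over the _dataset_link_entries cleanup):
```python
from clearml import Dataset

parent = Dataset.get(dataset_project="my_project", dataset_name="raw_images")
new_dataset = Dataset.create(
    dataset_project="my_project",
    dataset_name="resized_images",
    parent_datasets=[parent],
)

# drop the original raw/a.png entries from the new version
new_dataset.remove_files(dataset_path="raw/*.png")

# (resizing happens outside ClearML: read raw/a.png, write a_resized.png,
#  upload it to the bucket under resized/)

# register the resized files as external links, keeping relative paths
new_dataset.add_external_files(
    source_url="s3://my-bucket/resized/",  # placeholder bucket/prefix
    dataset_path="resized",
)

new_dataset.upload()
new_dataset.finalize()
```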
I can't get the conf credentials to work
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
I get the sidebars and the login page on my local PC
But the data isn't loaded
I tried not editing anything in docker-compose and just pasting my data in there. It didn't help
But there are still some weird issues; I cannot see the uploaded files in the bucket
No, I specify where to upload
I see the data is being uploaded to the S3 bucket; it's just that the log messages are really confusing
