This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I'm telling you, you are setting it wrong. Please follow the documentation.
@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix and can't finish the ClearML setup.
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
- This is what the web UI configuration looks like

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80 / 443).
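For example, a rough sketch of what that means on the Task side (the host, port and bucket below are placeholders, not your actual values):

import clearml

# explicit port in the output URI, even for the https default (443)
task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:443/rnd-dev",
)

The matching credentials entry in clearml.conf should carry the same host with the same explicit port (e.g. host: "our-host.com:443"), with secure: true for https or secure: false for plain http.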
The problem is that the clearml.conf s3 config doesn't support an empty region field; even an empty string crashes it.
Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI:
None doesn't work
None doesn't work
None doesn't work
None doesn't work
None gets replaced with None ://s3.host-our.com:8080, doesn't work
None doesn't work
None doesn't work
In all of these instances the S3 CREDENTIALS popup never disappears; it keeps popping up asking for credentials no matter how I try to set them.
We don't need a port.
"s3" is part of url that is configured on our routers, without it we cannot connect
- Here is what the client-side clearml.conf looks like, together with the script I'm using to create the tasks. Uploads seem to work now, fixed thanks to you guys 🙌



Hey, I see that 1.14.2 dropped.
I tried it but the issue is still there; maybe the hotfix is in the next patch?
Here is the setup so you can reproduce it (we don't have a region field):
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}
test.py
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )
So from our IT guys I now know that the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and have never had any problems.
This is where the crash happens inside the clearml Task
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
Digging deeper, it seems like a parsing issue.
@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to S3 you have 2 options: you either use this function: None
or you can change the api.files_server entry to your S3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://..." every time in Task.init.
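For example, roughly (the bucket URL below is a placeholder):

from clearml import Logger, Task

task = Task.init(project_name="project", task_name="task")

# option 1: per script, point debug samples (reported images) at the bucket
Logger.current_logger().set_default_upload_destination("s3://our-host.com:443/rnd-dev")

# option 2 is config only: in clearml.conf set api.files_server (and, for artifacts,
# sdk.development.default_output_uri) to the same s3://... URL, no code change needed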
Bump, still waiting; it's closing in on a month that we've been unable to deploy. We have a team of 10+ people.
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
and NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the "s3." prefix, as this is reserved only for AWS S3.
Meaning that you should configure your host as follows: host: "somehost.com:9000"
In the code, the output_uri should be None :<PORT>
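Putting those pieces together, something like this is what I mean (port 9000, the bucket name and the secure flag are just examples, adjust them to your setup):

s3 {
    credentials: [
        {
            # no "s3." prefix in the host, explicit port
            host: "somehost.com:9000"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
            # https -> true, plain http -> false
            secure: true
        },
    ]
}

and in the code:

task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://somehost.com:9000/rnd-dev",
)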
I know these keys work; the URL and everything else works, because I use these credentials daily.
OK, slight update. It seems like artifacts are uploading to the bucket now. Maybe my folder explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
Here is the script I'm using to test things. Thanks!

@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Good morning, I tried the script you provided and I'm getting somewhere.
Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.
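For reference, even something as small as this would tell us a lot (the endpoint, keys and bucket below are placeholders):

import boto3

# plain boto3 against the self-hosted endpoint: explicit scheme and port in endpoint_url
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.somehost.com:443",
    aws_access_key_id="XXXXXXXXXXXXXXXXXXXX",
    aws_secret_access_key="XXXXXXXXXXXXXXXXXXXX",
)
print(s3.list_objects_v2(Bucket="rnd-dev").get("KeyCount"))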
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set 443 everywhere in clearml.conf.
Setting the S3 host URLs with 443 in clearml.conf and also in the web UI made it work.
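Roughly, the working combination on my side looks like this (keys redacted, host is our s3. subdomain with the explicit 443 port; secure: true is my assumption for https):

s3 {
    credentials: [
        {
            host: "s3.somehost.com:443"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
            secure: true
        },
    ]
}

with output_uri="s3://s3.somehost.com:443/rnd-dev" in Task.init and the same host and port in the web UI credentials.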
I'm now almost at the finish line. The last thing that would be great to fix is archived task deletion.
For some reason I get a missing S3 keys error in the ClearML docker compose logs, and the folders/files are not deleted in the S3 bucket.
You can see what storage_credentials.conf looks like for me (first image). It is the same as the client clearml.conf (with the port, as you suggested).
I have the storage_credentials.conf mounted inside of async_delete as a volume
I have also confirmed that the mounting works and that storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.


