Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...
Specifying it like this gets me a different error:
Exception has occurred: ValueError
- Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.
During handling of the above exception, another exception occurred:
File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix, can't finish up the ClearML setup
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It wasn't clear to me that I also needed to set port 443 everywhere in clearml.conf
Setting the S3 host URLs with :443 in clearml.conf and also in the web UI made it work
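For reference, this is roughly what the working credentials entry looks like for us now (host and bucket here are placeholders, keys redacted), with the same host:port also entered in the web UI credentials popup:
s3 {
    credentials: [
        {
            host: "our-host.com:443"
            key: "XXXXXXXX"
            secret: "XXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}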
I'm now almost at the finish line. The last thing that would be great to fix is archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folders/files are not deleted from the S3 bucket.
You can see what storage_credentials.conf looks like for me (first image). It is the same as the client clearml.conf (with the port, as you suggested).
I have storage_credentials.conf mounted inside the async_delete container as a volume.
I have also confirmed that the mount works and storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
Hi, OK, I'm really close now to a working system
Debug images are uploading to S3, I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in a Task pops up a dialog asking me to fill in S3 credentials
I can't figure out what the right setup is for the credentials to work
This is what I have now (note that we don't have a region)
OK, slight update. It seems artifacts are now uploading to the bucket. Maybe my file explorer was showing a stale cache or something.
However, reported images are uploaded to the fileserver instead of S3
Here is the script I'm using to test things. Thanks

@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills them back in
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this
we might as well have "s5" there instead, the prefix itself is arbitrary, but it has to be there
host: "my-minio-host:9000"
The port should be whatever port is used by your S3 solution
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your S3 key/secret in clearml.conf
I suggest following this documentation - None
Meaning you should configure your host as follows: host: "somehost.com:9000"
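For example, a minimal s3 credentials section in clearml.conf would look something like this (all values are placeholders for your own setup):
s3 {
    credentials: [
        {
            host: "somehost.com:9000"
            key: "<ACCESS_KEY>"
            secret: "<SECRET_KEY>"
            bucket: "<BUCKET_NAME>"
        },
    ]
}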
@<1523701070390366208:profile|CostlyOstrich36> I'm still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server
Setting my config like this breaks Task.init
@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding MinIO in the documentation - None
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
And NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the "s3." prefix, as this is reserved only for AWS S3.
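If you want to rule out ClearML entirely, a quick sanity check of the endpoint with plain boto3 would look something like this (endpoint URL, keys and bucket below are placeholders for your setup):

import boto3

# placeholders: point this at your self-hosted S3 endpoint, with or without a port
s3 = boto3.client(
    "s3",
    endpoint_url="https://our-host.com:9000",
    aws_access_key_id="<ACCESS_KEY>",
    aws_secret_access_key="<SECRET_KEY>",
)
# list a single object to confirm the endpoint and credentials work
print(s3.list_objects_v2(Bucket="rnd-dev", MaxKeys=1))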
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object
Reason: Missing key and secret for S3 storage access ( )
This looks unrelated to the hotfix; it looks like you misconfigured something and are therefore failing to write to S3
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the web UI
Also, is it AWS S3 or a similar storage solution like MinIO?
Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "


@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to S3, you have 2 options: you either use this function: None
or you can change the api.files_server entry to your S3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf, you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://..." every time in Task.init. An example of the first option is below.
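A minimal sketch of the first option (project/task names and the bucket URI here are placeholders):

from clearml import Task

task = Task.init(project_name="project", task_name="task")
# route all debug samples (reported images etc.) to the S3 bucket
task.get_logger().set_default_upload_destination("s3://our-host.com:443/rnd-dev")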
So, from our IT guys I now know that:
the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and have never had any problems
This is where the crash happens inside the clearml Task
We don't need a port
the "s3" part of the URL is configured on our routers; without it we cannot connect
@<1590514584836378624:profile|AmiableSeaturtle81> weren't you using https for the s3 host? maybe the issue has something to do with that?
I can't get the conf credentials to work
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
Hey, I see that 1.14.2 dropped
I tried it, but the issue is still there; maybe the hotfix is in the next patch?
Here is the setup so you can reproduce it (we don't have a region field):
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}
test.py
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )
you might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with "s3." and check if the script above works when you do that
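i.e., something along these lines (host and bucket are placeholders):

from clearml.storage.helper import StorageHelper

# with host: "s3.somehost.com" in clearml.conf, use the matching s3.-prefixed URI here
helper = StorageHelper.get("s3://s3.somehost.com/rnd-dev")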
Unable to see the images with that link, though

