what about this script (replace with your creds, comment out creds in clearml.conf for now)
from clearml import Task
from clearml.storage.helper import StorageHelper

task = Task.init("test", "test")
# configure S3-compatible upload credentials for this task
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",       # your access key
    secret="",    # your secret key
    profile=None,
    secure=True
)
helper = StorageHelper.get("")  # destination URI goes here
As I wrote, you need to remove the s3 from the start of the host section.
Hi, ok I'm really close to a working system now
Debug images are uploading to S3, I'm seeing the files, all ok there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in a Task gives me a popup asking to fill in S3 credentials
I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
Hi @<1590514584836378624:profile|AmiableSeaturtle81> a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80 / 443).
Also, when uploading artifacts, I see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
And NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the s3 as this is reserved only for AWS S3.
- This is what the web UI configuration looks like

Meaning that you should configure your host as follows: host: "somehost.com:9000"
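For reference, a minimal sketch of the sdk.aws.s3 section in clearml.conf for an S3-compatible endpoint (the host, port, bucket and credentials below are placeholders, not your actual values):
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "somehost.com:9000"  # no s3. prefix, port included
                    bucket: "bucket1"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: true  # https; set to false for plain http
                }
            ]
        }
    }
}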
We use a Ceph Storage Cluster, the interface to it is the same as S3
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one, it's a self-hosted cluster.
You can try to replicate this S3 config I posted earlier.
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object
Reason: Missing key and secret for S3 storage access ( )
This looks unrelated to the hotfix; it looks like you misconfigured something and are therefore failing to write to S3
I can't get the conf credentials to work
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for 'None', check configuration file ~/clearml.conf
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set 443 everywhere in clearml.conf
Setting S3 host URLs with 443 in clearml.conf and also in the web UI made it work
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the clearml docker compose logs, and the folder/files are not deleted in the S3 bucket.
You can see what storage_credentials.conf looks like for me (first image). It is the same as the client clearml.conf (with the port, as you suggested)
I have the storage_credentials.conf mounted inside async_delete as a volume
I have also confirmed that the mounting works and I have storage_credentials.conf inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
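Roughly, the mount I mean looks like this (a sketch only; the host path and service layout are assumptions based on the standard clearml-server docker-compose, not my exact file):
  async_delete:
    # ... rest of the service definition unchanged ...
    volumes:
      # storage_credentials.conf sits in the same config folder on the host
      - /opt/clearml/config:/opt/clearml/config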
@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix, can't finish up the ClearML setup
@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to s3 you have 2 options: you either call set_default_upload_destination in your script (see the sketch below)
or you can change the api.files_server entry to your s3 bucket in clearml.conf . This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf , you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://..." every time in Task.init
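A minimal sketch of the first option (the project/task names, host and bucket are placeholders):
import numpy as np
from clearml import Task

task = Task.init(project_name="test", task_name="debug samples to s3")
logger = task.get_logger()
# debug samples reported after this call go to the bucket instead of the files server
logger.set_default_upload_destination("s3://our-host.com:9000/bucket1")
# report an image so something shows up under Debug Samples in the web UI
logger.report_image("debug", "random", iteration=0,
                    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))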
Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):
import boto3
import json
import six

# fill in your credentials and endpoint
key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

# open a session against the S3-compatible endpoint and upload a small JSON object
boto_session = boto3.Session(aws_access_key_id=key, aws_secret_access_key=secret, profile_name=profile)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=six.b(json.dumps(data)))
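If that upload goes through, you can also read the object back as a quick sanity check (same assumptions about creds and host as above):
obj = boto_resource.Object(bucket_name, filename).get()
print(obj["Body"].read())  # should print b'{"test": "data"}'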