Thanks SmallTurkey79! That actually solved my issue!
Hi UpsetPanda50, non-AWS S3 solutions are also supported. Please see the docs.
For DigitalOcean:
host: "(region).digitaloceanspaces.com:443"
bucket: "(bucket name)"
key: "(key)"
secret: "(secret)"
multipart: false
secure: true
(verify commented out entirely)
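For context, those fields live under sdk.aws.s3.credentials in clearml.conf. A minimal sketch with the same placeholders, assuming the standard clearml.conf layout:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "(region).digitaloceanspaces.com:443"
                    bucket: "(bucket name)"
                    key: "(key)"
                    secret: "(secret)"
                    multipart: false
                    secure: true
                    # verify: ...  # commented out entirely
                }
            ]
        }
    }
}
```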
So for you - make sure to add creds that have the right scope (read/write), and try specifying the bucket.
Then in the ClearML tasks themselves you point the task at it with output_uri="s3://(region).digitaloceanspaces.com:443/clearml/"
(I import this as a constant from a _constants.py file across tasks)
This exact combination is what I've been using without issue, but it took hours of guessing to get there.
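For completeness, a minimal sketch of the task side; the project/task names are placeholders, and the URI mirrors the one above:

```python
from clearml import Task

# Placeholder constant; in my setup this is imported from _constants.py
OUTPUT_URI = "s3://(region).digitaloceanspaces.com:443/clearml/"

task = Task.init(
    project_name="my-project",  # placeholder
    task_name="my-task",        # placeholder
    output_uri=OUTPUT_URI,      # artifacts and models get uploaded here
)
```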
Hi SmallTurkey79, could you please share how you did that? Here is what I tried, with port 443 and no region, but it is still not working. The endpoint variable in clearml/storage/helper.py always ends up None.
Thanks!
Try removing the region, it might be confusing it
I ran into this recently.
It's a small thing, but double-check the port: it should be 443, not 433 as in the docs (typo?) - seems you got this right in the screenshot.
No region should be set.
I don't use Backblaze, but if it helps I can show my DigitalOcean Spaces config; it should be comparable.
Great, SmallTurkey79 I will check that and let you know! Appreciate that!
Hi CostlyOstrich36, I tried, but got:
2025-01-15 16:35:13,846 - clearml.storage - ERROR - Failed uploading: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
and I saw that endpoint_url is None; indeed, cfg.host is returning None.
I tried hardcoding the endpoint_url and got:
2025-01-15 16:37:01,713 - clearml.storage - ERROR - Failed uploading: An error occurred (405) when calling the PutObject operation: Method Not Allowed
ValueError: Insufficient permissions (delete failed) for None
But I am still not sure it is actually about permissions, because the same ValueError was appearing before.
S3BucketConfig(bucket='mkrs-data', subdir='', host='',...)
In all cases I get host=''.
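For reference, this is the kind of standalone check I am running to isolate it from the task itself, using ClearML's StorageManager (file path and URI are placeholders):

```python
from clearml import StorageManager

# Placeholder path and URI; this goes through the same clearml.conf credentials lookup
StorageManager.upload_file(
    local_file="/tmp/smoke-test.txt",
    remote_url="s3://s3.us-west-004.backblazeb2.com:443/mkrs-data/smoke-test.txt",
)
```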
Just to make sure, does Backblaze support the boto3 SDK?
Yes, I can get access with the boto3 client.
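Roughly like this; the endpoint, bucket, and keys are placeholders for my actual values:

```python
import boto3

# Direct boto3 access to the Backblaze endpoint works with the placeholders filled in
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="(key id)",
    aws_secret_access_key="(application key)",
)
s3.put_object(Bucket="mkrs-data", Key="smoke-test.txt", Body=b"hello")
```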
Hi CostlyOstrich36, thanks! Yes, I did that as attached. I tried host: s3.us-west-004.backblazeb2.com and backblazeb2.com, with ports :9000 and :433.
and the log error is:
2025-01-15 13:01:58,264 - clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
It is still trying to use amazonaws. Any hints on that?
Thanks!