digging deeper it seems like a parsing issue
helper is returned as None for some reason
unable to see the images with that link tho
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
and NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host setting you need to remove the "s3." prefix, as this is reserved only for AWS S3.
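For reference, this is roughly what the non-AWS credentials section of clearml.conf looks like for a MinIO-style endpoint. The host, bucket, and keys below are placeholders, and whether you need the port, secure, or multipart settings depends on your setup:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # host WITHOUT any "s3." prefix; include the port if your endpoint needs one
                    host: "our-host.com:443"
                    bucket: "bucket"
                    key: "<ACCESS_KEY>"
                    secret: "<SECRET_KEY>"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}
```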
CostlyOstrich36 Hello, I'm still unable to understand how to fix this.
So from our IT guys I now know that
the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and never had any problems
This is where the crash happens inside the clearml Task
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object
Reason: Missing key and secret for S3 storage access ( )
This looks unrelated to the hotfix; it looks like you misconfigured something and are therefore failing to write to S3.
CostlyOstrich36 Still unable to understand what I'm doing wrong.
We have self hosted S3 Ceph storage server
Setting my config like this breaks task.init
Hi, OK, I'm really close now to a working system.
The debug image is uploading to S3, I'm seeing the files, all OK there.
The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in the Task drops me a popup to fill in S3 credentials.
I can't figure out what the right setup is for the creds to work.
This is what I have now (note that we don't have a region).
clearml.conf is a fresh one; I ran clearml-init to make sure.
btw AmiableSeaturtle81, can you try to specify the host without the http* prefix
and try to set the port to 443? Like s3.my_host:443
(or even without the port)
I think the problem is a missing region definition. You need to set the region in the config file.
But it looks like it will not work with the existing version, since there still appears to be a bug related to this. The hotfix is already on the way, from my understanding.
So, in short, you need to set the region in the config file + wait for the hotfix that is pending for 1.14.
Also, is it AWS S3 or some similar storage solution like MinIO?
This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri),
and the clearml.conf
that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
Good morning, I tried the script you provided and I'm getting somewhere.
AmiableSeaturtle81 weren't you using https for the s3 host? maybe the issue has something to do with that?
As I wrote, you need to remove the "s3" from the start of the host section.
I tried it with port, but still having the same issue
Tried it with/without secure and multipart
Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.
We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.
I know these keys work, and the URL and everything else works, because I use these creds daily.
We might as well have "s5" there, but it is needed there.
Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):
import json

import boto3

key = ""      # S3 access key
secret = ""   # S3 secret key
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

# Create a boto3 session with the explicit credentials
boto_session = boto3.Session(
    aws_access_key_id=key,
    aws_secret_access_key=secret,
    profile_name=profile,
)

# Point the S3 resource at the custom endpoint (no region)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)

# Upload a small JSON payload to verify write access
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=json.dumps(data).encode("utf-8"))
Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "