host: "my-minio-host:9000"
The port should be whatever port your S3 solution uses
@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills it back in
we do, yes. Changing it to https in settings doesn't help
I know these keys work; the URL and everything else works, because I use these creds daily
I think the problem is a missing region definition. You need to set the region in the config file.
But it looks like it will not work with the existing version, since there still appears to be a bug related to this. The hotfix is already on the way, from my understanding.
So, in short, you need to set the region in the config file + wait for the hotfix that is pending for 1.14
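For reference, a minimal sketch of where the region would go in clearml.conf, assuming the standard sdk.aws.s3 layout (the region value and credentials here are placeholders):
sdk {
    aws {
        s3 {
            # Placeholder region; use whatever your S3 backend expects
            region: "us-east-1"
            key: "<access key>"
            secret: "<secret key>"
        }
    }
}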
@<1590514584836378624:profile|AmiableSeaturtle81> weren't you using https for the s3 host? maybe the issue has something to do with that?
As I wrote, you need to remove the s3 from the start of the host section.
@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding minio in the documentation - None
@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Hi @<1590514584836378624:profile|AmiableSeaturtle81>, a self-hosted S3 service must have the protocol (http/https) and port specified, even for the default ones (80 / 443).
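As an illustration only (host, port, and credentials are placeholders), a per-bucket entry in clearml.conf can carry the port explicitly, with the secure flag selecting http vs https:
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # host:port only -- no scheme, and no "s3." prefix for non-AWS endpoints
                    host: "my-minio-host:9000"
                    key: "<access key>"
                    secret: "<secret key>"
                    secure: true     # true -> https, false -> http
                    multipart: false
                }
            ]
        }
    }
}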
@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to s3, you have 2 options: you either use this function: None , or you can change the api.files_server entry to your s3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
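A minimal sketch of the first option, assuming the elided function is Logger.set_default_upload_destination (project, task, host, and bucket names here are placeholders):
from clearml import Task

task = Task.init(project_name="project", task_name="task")
# Route all debug samples (and other logger uploads) to the bucket
task.get_logger().set_default_upload_destination("s3://my-minio-host:9000/bucket")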
Also, in clearml.conf, you can change sdk.development.default_output_uri such that you don't need to set output_uri="s3://..." every time in Task.init
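A sketch of that entry, assuming the standard clearml.conf layout (the bucket URL is a placeholder):
sdk {
    development {
        # Tasks default to this output_uri unless one is passed to Task.init
        default_output_uri: "s3://my-minio-host:9000/bucket"
    }
}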
Hi @<1590514584836378624:profile|AmiableSeaturtle81>! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.
we might as well have "s5" there; the "s3" is just part of our hostname, so it is needed there
Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
And NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the s3, as this is reserved only for AWS S3.
Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):
import json

import boto3

# Fill these in with your actual credentials and endpoint before running
key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

# Build a session from explicit credentials (no shared AWS profile)
boto_session = boto3.Session(
    aws_access_key_id=key,
    aws_secret_access_key=secret,
    profile_name=profile,
)

# Point the S3 resource at the self-hosted endpoint instead of AWS
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)

# Upload a small JSON payload to verify write access to the bucket
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=json.dumps(data).encode("utf-8"))
Specifying it like this gets me a different error:
Exception has occurred: ValueError
- Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.
During handling of the above exception, another exception occurred:
File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
Yes, the credentials seem to work
I'm trying to figure out now why I don't see the uploaded files / folders
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask our IT guy to check the logs for whether I'm uploading any files)
I tried it with the port, but I'm still having the same issue
Tried it with/without secure and multipart
@<1523701070390366208:profile|CostlyOstrich36> Hello John, we are still unable to use ClearML with our self-hosted S3 Ceph instances. Is there any update on the hotfix for 1.14?
Hi, OK, I'm really close to a working system now
Debug images are uploading to s3; I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in a Task pops up a dialog asking me to fill in s3 credentials
I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server
Setting my config like this breaks Task.init
Hi @<1590514584836378624:profile|AmiableSeaturtle81>, you need to set up your s3 key/secret in clearml.conf
I suggest following this documentation - None
Also, is it AWS S3 or some similar storage solution like MinIO?
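A minimal sketch of the global key/secret entries, assuming the standard clearml.conf layout (values are placeholders):
sdk {
    aws {
        s3 {
            # Default credentials, used for any bucket without a per-host entry
            key: "<access key>"
            secret: "<secret key>"
        }
    }
}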
It looks like I'm moving forward
Setting the url in clearml.conf without "s3" as suggested works (but I don't add a port there, not sure if that breaks something; we don't have a port)
host: "our-host.com"
Then in test_task.py:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",  # elided bucket URL
)
I think the connection is created
What I'm getting now is a bucket error; I suppose I have to specify it somewhere?
In which UI? Because there are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL)
Our s3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and upload works)