Good morning, I tried the script you provided and I'm getting somewhere.
I know these keys work; the URL and everything else works, because I use these creds daily.
We do, yes. Changing it to https in the settings doesn't help.
@<1523701435869433856:profile|SmugDolphin23> Any news?
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set port 443 everywhere in clearml.conf.
Setting the S3 host URLs with port 443 in clearml.conf and also in the web UI made it work.
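In case it helps anyone else, this is roughly the shape of the clearml.conf block that ended up working for me, with the same host:443 entered again in the web UI credentials (the host here is a placeholder, not our real one):
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # the port has to be explicit, even though 443 is the https default
                    host: "s3.my-company-host.com:443"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}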
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folders/files are not deleted in the S3 bucket.
You can see what storage_credentials.conf looks like for me (first image). It is the same as the client clearml.conf (with the port, as you suggested).
I have storage_credentials.conf mounted inside the async_delete service as a volume.
I have also confirmed that the mount works and that storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
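In case the mount itself matters, this is roughly what I added to the async_delete service in docker-compose (the paths are just the ones from my setup, so treat them as an assumption rather than the canonical layout):
  async_delete:
    volumes:
      # mount the credentials file into the container's config folder
      - /opt/clearml/config/storage_credentials.conf:/opt/clearml/config/storage_credentials.conf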
The problem is that the clearml.conf S3 config doesn't support an empty region field; even an empty string crashes it.
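To show what I mean, a credentials entry like this crashes for me (sketch, values are placeholders):
{
    host: "s3.my-company-host.com:443"
    key: "xxx"
    secret: "xxx"
    region: ""
    secure: true
}
while the exact same block with the region line removed entirely loads fine for us.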
In the code, the output URI should be with None :<PORT>
Can you add your full configurations again?
- This is what the web UI configuration looks like
Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI
- None doesn't work
- None doesn't work
- None doesn't work
- None doesn't work
- None gets replaced with None ://s3.host-our.com:8080 , doesn't work
- None doesn't work
- None doesn't work
In all of these cases the S3 CREDENTIALS popup never disappears; it keeps popping up asking for creds no matter how I try to set them.
I don't have a region. I guess I will wait till tomorrow then?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
But there are still some weird issues: I cannot see the files uploaded in the bucket.
clearml.conf is a fresh one; I ran clearml-init to make sure.
Specifying it like this gets me a different error:
Exception has occurred: ValueError
- Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.
During handling of the above exception, another exception occurred:
File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
py file:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: " our-host.com "
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
Hi, OK, I'm really close to a working system now.
Debug images are uploading to S3; I'm seeing the files, all OK there.
The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in a Task gives me a popup asking to fill in S3 credentials.
I can't figure out what the right setup is for the creds to work.
This is what I have now (note that we don't have a region).
OK, slight update. It seems like artifacts are now uploading to the bucket. Maybe my folder explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
Here is the script I'm using to test things. Thanks.
We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.
- Here is what the client-side clearml.conf looks like, together with the script I'm using to create the tasks. Uploads seem to work and are fixed thanks to you guys 🙌
@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding minio in the documentation - None
What about this script? (Replace with your creds, and comment out the creds in clearml.conf for now.)
from clearml import Task
from clearml.storage.helper import StorageHelper
task = Task.init("test", "test")
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",
    secret="",
    profile=None,
    secure=True,
)
helper = StorageHelper.get("")
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , any non-AWS S3-like storage must have a port in this setup; how did you configure the SDK?
Also the two ways you're showing are the same - the popup will fill in the details in the settings page
@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills them back in
host: "my-minio-host:9000"
The port should be whatever port is used by your S3 solution
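For example, with a MinIO instance listening on port 9000, the corresponding output URI would look something like this (bucket name is a placeholder):
from clearml import Task

task = Task.init(
    project_name="test",
    task_name="minio output_uri test",
    # same host:port as in clearml.conf, plus the bucket name
    output_uri="s3://my-minio-host:9000/my-bucket",
)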
Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):
import boto3
import json
import six
key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}
boto_session = boto3.Session(aws_access_key_id=key, aws_secret_access_key=secret, profile_name=profile)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=six.b(json.dumps(data)))
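If the put succeeds, reading the object back is a quick end-to-end check of the endpoint and creds (a sketch reusing the names from the snippet above):
obj = boto_resource.Object(bucket_name, filename)
print(obj.get()["Body"].read())  # should print b'{"test": "data"}'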
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80/443).
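For example, the two default cases would look roughly like this in clearml.conf (host is a placeholder):
# https on its default port
host: "s3.my-host.com:443"
secure: true

# http on its default port
host: "s3.my-host.com:80"
secure: false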