btw @<1590514584836378624:profile|AmiableSeaturtle81> , can you try to specify the host without http*
and try to set the port to 443? like s3.my_host:443
(or even without the port)
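The advice above (drop the scheme, keep just host:port) can be sketched with a tiny stdlib helper — `normalize_host` here is hypothetical, not a ClearML function:

```python
from urllib.parse import urlsplit

def normalize_host(host: str) -> str:
    """Strip an accidental scheme prefix, keeping only host[:port]."""
    if "://" in host:
        # urlsplit puts everything after the scheme into netloc
        return urlsplit(host).netloc
    return host

print(normalize_host("https://s3.my_host:443"))  # s3.my_host:443
print(normalize_host("s3.my_host:443"))          # s3.my_host:443
```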
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object
Reason: Missing key and secret for S3 storage access (
)
This looks unrelated to the hotfix; it looks like you misconfigured something and are therefore failing to write to S3
will it be appended in clearml?
"s3" is part of domain to the host
@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
helper is returned as None for some reason
I know these keys work; the URL and everything else works because I use these creds daily
Setting these URLs under SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI
None doesn't work
None doesn't work
None doesn't work
None doesn't work
None gets replaced with None://s3.host-our.com:8080, doesn't work
None doesn't work
None doesn't work
In all of these instances the S3 CREDENTIALS popup never disappears; it keeps popping up asking for creds no matter how I try to set them
digging deeper it seems like a parsing issue
also, when uploading artifacts, I see where they are stored in the S3 bucket, but I can't find where the debug images are stored
maybe someone on your end can try to parse such a config and see if they also have the same problem
clearml.conf is a fresh one; I ran clearml-init to make sure
@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix and can't finish the ClearML setup
I think that the problem is with missing region definition. You need to set region in the config file.
But it looks like that for the existing version it will not work since there still appears to be a bug related to this. The hotfix is already on the way from my understanding
So, in short, you need to set the region in the config file + wait for the hotfix that is pending for 1.14
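For illustration, a sketch of where `region` could go in clearml.conf (placeholder values, assuming the standard `sdk.aws.s3` layout — both a global and a per-host setting are shown):

```
sdk {
    aws {
        s3 {
            region: "us-east-1"  # placeholder; global default
            credentials: [
                {
                    host: "our-host.com:8080"  # placeholder host:port
                    key: "xxx"
                    secret: "xxx"
                    region: "us-east-1"  # placeholder; per-host override
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}
```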
ok, slight update. It seems like artifacts are uploading now to bucket. Maybe my folder explorer used old cache or something.
However, reported images are uploaded to fileserver instead of s3
here is the script I'm using to test things. Thanks
host: "my-minio-host:9000"
The port should be whatever port that is used by your S3 solution
So from our IT guys I now know that
the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and have never had any problems
This is where the crash happens inside the clearml Task
we might as well have "s5" there instead, but it is needed there
Adding bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
py file:
task: clearml.Task = clearml.Task.init(
project_name="project",
task_name="task",
output_uri=" None ",
)
clearml.conf:
{
# This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
host: " our-host.com "
key: "xxx"
secret: "xxx"
multipart: false
secure: true
}
This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri),
and the clearml.conf
that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
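A sketch of that split, with placeholder names: the code passes output_uri (e.g. `Task.init(..., output_uri="s3://our-host.com:8080/bucket")`), while clearml.conf holds the matching credentials:

```
# clearml.conf side: credentials whose host must match the output_uri endpoint
sdk.aws.s3.credentials: [
    {
        host: "our-host.com:8080"  # placeholder; same host:port as in output_uri
        key: "xxx"
        secret: "xxx"
        multipart: false
        secure: true
    }
]
```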
Also, is it an AWS S3 or is it some similar storage solution like Minio?
what about this script? (replace with your creds; comment out the creds in clearml.conf for now)
from clearml import Task
from clearml.storage.helper import StorageHelper
task = Task.init("test", "test")
task.setup_aws_upload(
bucket="bucket1",
host="localhost:9000",
key="",
secret="",
profile=None,
secure=True
)
helper = StorageHelper.get("")