So from our IT guys I now know that the "s3" part of the URL is a subdomain; we use it in all the other libs like boto3 and cloudpathlib and have never had any problems.
This is where the crash happens inside the clearml Task
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , any non-AWS S3-like storage must have a port in this setup. How did you configure the SDK?
Also, the two ways you're showing are the same: the popup will fill in the details in the settings page.
This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I'm telling you, you're setting it wrong. Please follow the documentation.
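For illustration, a minimal sketch of the first piece (the bucket URI below is a placeholder, not your real endpoint):

import clearml

task = clearml.Task.init(
    project_name="project",
    task_name="task",
    # WHERE the data goes; for non-AWS S3-like storage this must include the port
    output_uri="s3://my-host.com:9000/my-bucket",
)

The credentials for that host then go in clearml.conf, which is the second piece.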
I don't have a region. I guess I will wait till tomorrow then?
py file:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: " our-host.com "
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
@<1590514584836378624:profile|AmiableSeaturtle81> weren't you using https for the s3 host? Maybe the issue has something to do with that?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 Python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look, or what changes need to be made.
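For reference, a connectivity check along these lines is what would tell us the most (the endpoint, credentials, and bucket below are placeholders):

import boto3

# plain boto3 against an S3-compatible endpoint, no clearml involved
s3 = boto3.client(
    "s3",
    endpoint_url="https://our-host.com",
    aws_access_key_id="xxx",
    aws_secret_access_key="xxx",
)
print(s3.list_objects_v2(Bucket="bucket1"))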
It looks like I'm moving forward.
Setting the URL in clearml.conf without "s3" as suggested works (but I don't add a port there; not sure if it breaks something, since we don't have a port):
host: " our-host.com "
Then in test_task.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
I think the connection is created.
What I'm getting now is a bucket error; I suppose I have to specify the bucket somewhere?
@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as the form auto-fills it back in.
OK, slight update: it seems like artifacts are uploading to the bucket now. Maybe my file explorer was using an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
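A possible angle on the images (a sketch; the bucket URI below is a placeholder): debug images have their own upload destination on the logger, separate from output_uri, and it can be pointed at the bucket explicitly:

task.get_logger().set_default_upload_destination("s3://our-host.com:9000/bucket1")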
Here is the script I'm using to test things. Thanks.
Bump, still waiting. We're closing in on a month of being unable to deploy, and we have a team of 10+ people.
We do, yes. Changing it to https in the settings doesn't help.
host: "my-minio-host:9000"
The port should be whatever port is used by your S3 solution.
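And for a non-AWS endpoint, the output_uri should carry the same host:port (the bucket name here is a placeholder):

output_uri="s3://my-minio-host:9000/bucket1"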
But there are still some weird issues; I cannot see the uploaded files in the bucket.
I tried it with the port, but I'm still having the same issue.
Tried it with/without secure and multipart.
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.
What about this script? (Replace with your creds, and comment out the creds in clearml.conf for now.)
from clearml import Task
from clearml.storage.helper import StorageHelper

task = Task.init("test", "test")
# register the S3 credentials on the task programmatically,
# bypassing the credentials section in clearml.conf
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",
    secret="",
    profile=None,
    secure=True,
)
helper = StorageHelper.get("")
I can't get the conf credentials to work.
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
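In case the nesting is the issue: the conf block shown earlier is one entry of a credentials list under sdk.aws.s3. A sketch of the full section, with placeholder values:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # hypothetical host:port and keys
                    host: "our-host.com:9000"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}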