Ok, slight update: it seems like artifacts are uploading to the bucket now. Maybe my folder explorer was using an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.
Here is the script I'm using to test things. Thanks
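For reference, a minimal version of that kind of test script could look like this (just a sketch: the project/task names and the endpoint URL are placeholders, not our real values):

import clearml
import numpy as np

# Sketch: upload one artifact and report one debug image to see where each lands.
task = clearml.Task.init(
    project_name="s3-test",                       # placeholder
    task_name="upload-check",                     # placeholder
    output_uri="s3://our-host.com:8080/rnd-dev",  # assumed endpoint/bucket
)
task.upload_artifact("test-artifact", artifact_object={"hello": "world"})
task.get_logger().report_image(
    title="debug",
    series="sample",
    iteration=0,
    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
)
task.close()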
The "s3" in our hostname is just a name; we might as well have "s5" there, but the prefix is part of our DNS name, so it is needed there.
@<1523701070390366208:profile|CostlyOstrich36> Hello John, we are still unable to use ClearML with our self-hosted S3 Ceph instances. Is there any update on the hotfix for 1.14?
In the code, the output_uri should be None:<PORT> (i.e., the endpoint URL with the port included).
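As a sketch (the hostname, port, and bucket are assumptions; use your actual endpoint):

import clearml

# Sketch: output_uri pointing at the self-hosted S3 endpoint with the port included.
task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:8080/rnd-dev",
)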
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.
- This is what the web UI configuration looks like
clearml.conf is a fresh one; I ran clearml-init to make sure.
@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server.
Setting my config like this breaks task.init
@<1523701435869433856:profile|SmugDolphin23> Any news?
We use a Ceph Storage Cluster; its interface is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the S3 config I posted earlier.
Unable to see the images with that link though.
Btw @<1590514584836378624:profile|AmiableSeaturtle81>, can you try to specify the host without the http* prefix,
and try to set the port to 443? Like s3.my_host:443
(or even without the port)
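Something like this in clearml.conf (a sketch; s3.my_host and the bucket name are placeholders):

s3 {
    credentials: [
        {
            host: "s3.my_host:443"   # no http(s) scheme, port explicit
            key: "xxx"
            secret: "xxx"
            bucket: "my-bucket"
            secure: true             # TLS, matching port 443
        },
    ]
}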
Yes, the credentials seem to work.
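For what it's worth, an easy way to confirm them outside of ClearML is a plain boto3 check, roughly like this (a sketch; the endpoint, bucket, and keys are placeholders):

import boto3

# Sketch: verify the Ceph S3 credentials independently of ClearML.
s3 = boto3.client(
    "s3",
    endpoint_url="https://our-host.com:8080",  # assumed endpoint
    aws_access_key_id="xxx",
    aws_secret_access_key="xxx",
)
s3.put_object(Bucket="rnd-dev", Key="clearml-test.txt", Body=b"hello")
print([o["Key"] for o in s3.list_objects_v2(Bucket="rnd-dev").get("Contents", [])])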
I'm now trying to figure out why I don't see the uploaded files/folders.
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask our IT guy to check the logs to see if I'm uploading any files)
The problem is that the clearml.conf s3 config doesn't support an empty region field; even an empty string crashes it.
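To illustrate with the config from above (a sketch; whether omitting the field also fails may depend on the version, this is just what we observed):

s3 {
    credentials: [
        {
            host: "s3.somehost.com"
            key: "xxx"
            secret: "xxx"
            bucket: "rnd-dev"
            region: ""   # even an empty string here crashes Task.init for us
        },
    ]
}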
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
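I also tried pointing the debug images at the bucket explicitly via the Logger, since they have their own upload destination separate from output_uri. Roughly like this (a sketch; the URI is a placeholder):

import clearml

task = clearml.Task.init(project_name="project", task_name="task")
# Sketch: point debug images/samples at the S3 bucket explicitly.
task.get_logger().set_default_upload_destination("s3://our-host.com:8080/rnd-dev")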
Hey, I see that 1.14.2 dropped.
I tried it, but the issue is still there. Maybe the hotfix is in the next patch?
Here is the setup so you can reproduce it (we don't have a region field):
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}
test.py:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="None",
)
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access (None)
But there are still some weird issues: I cannot see the files uploaded in the bucket.
Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI:
None: doesn't work
None: doesn't work
None: doesn't work
None: doesn't work
None (gets replaced with None://s3.host-our.com:8080): doesn't work
None: doesn't work
None: doesn't work
In all of these instances the S3 CREDENTIALS popup never disappears; it always pops up asking for credentials, no matter how I try to set them.
@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Again, I'm telling you: please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be host: "our-host.com:<PORT>"
and NOT host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host setting you need to remove the "s3" prefix, as this is reserved only for AWS S3.
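So the credentials entry would look something like this (a sketch; host, port, and bucket are placeholders for your setup):

s3 {
    credentials: [
        {
            host: "our-host.com:8080"   # no "s3." prefix, port included
            key: "xxx"
            secret: "xxx"
            bucket: "rnd-dev"
            secure: true
        },
    ]
}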
There is a typo in the clearml.conf I sent you, on line 87: there should be "key", not "ey". I'm aware of it.
py file:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="None",
)
clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: "our-host.com"
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
Can you add your full configuration again?
I think that the problem is the missing region definition. You need to set the region in the config file.
But it looks like it will not work with the existing version, since there still appears to be a bug related to this. The hotfix is already on the way, from my understanding.
So, in short: you need to set the region in the config file and wait for the hotfix that is pending for 1.14.
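As a sketch (the region value is a placeholder; any non-empty name your Ceph setup accepts should do, e.g. "default"):

s3 {
    credentials: [
        {
            host: "our-host.com:8080"
            key: "xxx"
            secret: "xxx"
            bucket: "rnd-dev"
            region: "default"   # placeholder; Ceph typically doesn't care about the value
        },
    ]
}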
The storage helper is returned as None for some reason.
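If it helps narrow it down, this is roughly how the helper can be inspected directly (a sketch; the URI is a placeholder, and StorageHelper is an internal API that may change between versions):

from clearml.storage.helper import StorageHelper

# Sketch: check what ClearML resolves for the bucket URI.
helper = StorageHelper.get("s3://our-host.com:8080/rnd-dev")
print(helper)  # prints None when this issue reproduces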