Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
We use a Ceph Storage Cluster; the interface to it is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the s3 config I posted earlier.
Hi AmiableSeaturtle81 ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...
Hi AmiableSeaturtle81 ! To help us debug this: are you able to simply use the boto3
python package to interact with your cluster?
If so, what does that code look like? That would give us some insight into how the config should actually look, or what changes need to be made.
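For example, something as simple as the following (a rough sketch; the endpoint, key, secret and bucket are placeholders, adjust them to your cluster):
import boto3
# connect to the self-hosted, S3-compatible (Ceph) endpoint
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.somehost.com",
    aws_access_key_id="XXXXXXXXXXXXXXXXXXXX",
    aws_secret_access_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
)
# list a few objects to confirm the endpoint and credentials work
response = s3.list_objects_v2(Bucket="rnd-dev", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])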
Hi AmiableSeaturtle81 , the hotfix should be right around the corner 🙂
CostlyOstrich36 Hello John, we are still unable to use ClearML with our self-hosted S3 Ceph instances. Is there any update on the hotfix for 1.14?
Bump, still waiting; it's closing in on a month that we've been unable to deploy. We have a team of 10+ people.
Unable to see the images with that link, though.
Yes, the credentials seem to work.
I'm now trying to figure out why I don't see the uploaded files / folders:
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask our IT guy to check the logs to see whether I'm uploading any files)
Hey, I see that 1.14.2 dropped.
I tried it but the issue is still there; maybe the hotfix is in the next patch?
Here is the setup so you can reproduce it (we don't have a region field).
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}
test.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="None",
)
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )
In which UI? Because there are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL).
Our S3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and upload works).
The clearml.conf is a fresh one; I ran clearml-init to make sure.
You might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with "s3.", and see if the script above works if you do that.
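A minimal sketch of what I mean, reusing the host and bucket from the config you posted (the exact output_uri value is just an illustration, not a verified fix):
in clearml.conf:
    host: "s3.somehost.com"
in the script:
    import clearml
    task = clearml.Task.init(
        project_name="project",
        task_name="task",
        output_uri="s3://s3.somehost.com/rnd-dev",
    )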
AmiableSeaturtle81 , please see the section regarding MinIO in the documentation - None
The helper is returned as None for some reason.
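Roughly how I am checking it (assuming StorageHelper.get is the right call here; the bucket URL is the one from my config):
from clearml.storage.helper import StorageHelper
# try to resolve a storage helper for the bucket; this comes back as None
helper = StorageHelper.get("s3://s3.somehost.com/rnd-dev")
print(helper)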
Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI:
None doesn't work
None doesn't work
None doesn't work
None doesn't work
None gets replaced with None ://s3.host-our.com:8080 and doesn't work
None doesn't work
None doesn't work
In all of these instances the S3 CREDENTIALS popup never disappears; it still pops up asking for creds no matter how I try to set them.
Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: "None"
Can you add your full configurations again?
We might as well have had "s5" there, but the prefix is needed.
maybe someone on your end can try to parse such a config and see if they also have the same problem
Do I need the clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.
CostlyOstrich36 Any news on this? We are currently stuck without this fix and can't finish up the ClearML setup.
I tried it with the port, but I'm still having the same issue.
Tried it with/without secure and multipart.
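For reference, this is roughly the shape of the credentials entry that was tried (the port is a placeholder, and the secure / multipart flags follow the MinIO example in the docs; both were toggled between true and false):
s3 {
    credentials: [
        {
            host: "s3.somehost.com:443"  # placeholder port
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
            secure: true       # toggled true / false
            multipart: false   # toggled true / false
        },
    ]
}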