We don't need a port
"s3" is part of the URL that is configured on our routers; without it we cannot connect
I can't get the conf credentials to work
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
Meaning that you should configure your host as follows: host: "somehost.com:9000"
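For reference, a minimal sketch of the relevant clearml.conf section under that assumption (the host, key, and secret here are placeholders):

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # Non-AWS endpoint: host includes the port
                    host: "somehost.com:9000"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}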
Can you add your full configurations again?
There is a typo in the clearml.conf I sent you, on line 87: it should be "key", not "ey". I'm aware of it.
In the code, the output URI should be None:<PORT>
You might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with s3.
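If I read that suggestion right, a sketch of what it would look like (host, port, and bucket name are placeholders):

In clearml.conf:
    host: "s3.somehost.com:9000"

In the code:
    import clearml

    task = clearml.Task.init(
        project_name="project",
        task_name="task",
        # s3:// scheme, then host:port, then the bucket
        output_uri="s3://s3.somehost.com:9000/bucket",
    )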
See if the script above works if you do that.
Good morning, I tried the script you provided and I'm getting somewhere
Unable to see the images with that link though
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this
@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix and can't finish up the ClearML setup
Hi, OK, I'm really close to a working system now
Debug images are uploading to S3, I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in the task pops up a dialog asking me to fill in S3 credentials
I can't figure out the right setup for the credentials to work
This is what I have now (note that we don't have a region)
In which UI? Because there are two ways to do it. When clicking on an artifact URL there is a popup (but it has no way to change the host URL)
Our S3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and upload works)
Yes, the credentials seem to work
I'm trying to figure out now why I don't see the uploaded files/folders:
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see whether I'm uploading any files)
I tried it with the port, but I'm still having the same issue
Tried it with/without secure and multipart
Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO
But there are still some weird issues, I cannot see the files uploaded in the bucket
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your S3 key/secret in clearml.conf
I suggest following this documentation - None
@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server
Setting my config like this breaks Task.init
It looks like I'm moving forward
Setting the URL in clearml.conf without "s3" as suggested works (but I don't add a port there; not sure if that breaks something, we don't have a port)
host: " our-host.com "
Then in test_task.py:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
I think the connection is created
What I'm getting now is a bucket error; I suppose I have to specify it somewhere?
py file:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: " our-host.com "
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
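If it helps, a sketch of the two places a bucket is usually specified (the bucket name here is a placeholder): in the output URI path rather than the host entry,

    output_uri="s3://our-host.com/bucket"

or as a bucket-specific entry in the credentials list in clearml.conf:

    credentials: [
        {
            bucket: "bucket"
            host: " our-host.com "
            key: "xxx"
            secret: "xxx"
        }
    ]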
I don't have a region. I guess I will wait till tomorrow then?
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored
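One assumption worth checking here: artifacts follow output_uri, while debug samples follow the logger's default upload destination. A minimal sketch of pointing both at the same bucket (bucket name is a placeholder):

    import clearml

    task = clearml.Task.init(project_name="project", task_name="task")
    # Send debug images (report_image, etc.) to the S3 bucket as well
    task.get_logger().set_default_upload_destination("s3://our-host.com/bucket")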
Also, is it AWS S3 or some similar storage solution like MinIO?