@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It wasn't clear to me that I also need to set 443 everywhere in clearml.conf
Setting the S3 host URLs with 443 in clearml.conf and also in the web UI made it work
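For reference, a sketch of the kind of clearml.conf entry this refers to (the endpoint, bucket and keys below are placeholders):

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # self-hosted endpoint: the port must be explicit, even the default 443
                    host: "s3.host-our.com:443"
                    bucket: "my-bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    secure: true
                }
            ]
        }
    }
}
```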
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folders/files are not deleted in the S3 bucket.
You can see what storage_credentials.conf looks like for me (first image). It's the same as the client clearml.conf (with the port, as you suggested).
I have storage_credentials.conf mounted inside the async_delete container as a volume.
I have also confirmed that the mount works and storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
Hi @<1590514584836378624:profile|AmiableSeaturtle81>, a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80/443).
Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI:
- None, doesn't work
- None, doesn't work
- None, doesn't work
- None, doesn't work
- None gets replaced with None://s3.host-our.com:8080, doesn't work
- None, doesn't work
- None, doesn't work
In all of these cases the S3 CREDENTIALS popup never disappears; it keeps popping up asking for credentials no matter how I try to set them
@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills it back in
- This is what the web UI configuration looks like
- Here is what the client-side clearml.conf looks like, together with the script I'm using to create the tasks. Uploads seem to work now, fixed thanks to you guys 🙌
btw @<1590514584836378624:profile|AmiableSeaturtle81>, can you try specifying the host without the http* prefix,
and setting the port to 443? like s3.my_host:443
(or even without the port)
Hi @<1590514584836378624:profile|AmiableSeaturtle81>, any non-AWS S3-like storage must have a port specified in this setup. How did you configure the SDK?
Also the two ways you're showing are the same - the popup will fill in the details in the settings page
@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to figure out how to fix this
In which UI? There are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL)
Our S3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and uploads work)
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
@<1523701435869433856:profile|SmugDolphin23> Any news?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...
@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?
Unable to see the images with that link, though
We do, yes. Changing it to https in the settings doesn't help
@<1590514584836378624:profile|AmiableSeaturtle81> weren't you using https for the s3 host? maybe the issue has something to do with that?
Hi, OK, I'm really close now to a working system
Debug images are uploading to S3, I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in a Task gives me a popup to fill in S3 credentials
I can't figure out what the right setup is for the credentials to work
This is what I have now (note that we don't have a region)
@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to S3 you have 2 options: you either use this function: None
or you can change the api.files_server entry in clearml.conf to your S3 bucket. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://... every time in Task.init
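For illustration, a minimal sketch of the first option; the bucket URL below is a placeholder:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="debug-samples-to-s3")
# send debug samples (reported images) to the bucket instead of the default file server
task.get_logger().set_default_upload_destination("s3://s3.host-our.com:443/my-bucket")
```

The second option would be pointing api.files_server (and optionally sdk.development.default_output_uri) in clearml.conf at the same s3:// URL.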
OK, slight update. It seems like artifacts are uploading to the bucket now. Maybe my folder explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3
Here is the script I'm using to test things. Thanks
Yes, credentials seem to work
I'm now trying to figure out why I don't see the uploaded files/folders
- I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
- Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see whether I'm uploading any files)
@<1590514584836378624:profile|AmiableSeaturtle81> ok, I think that your credentials from clearml.conf are actually working now. Let's not change them.
Now let's try this simple code:
```python
from clearml import Task
import numpy as np

if __name__ == "__main__":
    task = Task.init(task_name="test4", project_name="test4", output_uri="
")
    image = np.random.randint(0, 256, size=(500, 1000, 3), dtype=np.uint8)
    task.upload_artifact("image", image)
```
You should change the task_name and project_name from test just in case some object has been created previously
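As a quick sanity check, something along these lines could confirm where the artifact ended up (a sketch, assuming the task/project names from the snippet above):

```python
from clearml import Task

# fetch the task created by the test script and print the artifact's storage URL
t = Task.get_task(project_name="test4", task_name="test4")
print(t.artifacts["image"].url)  # should point at the s3:// bucket, not the file server
```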
But there are still some weird issues, I cannot see the files uploaded in the bucket
Good morning, I tried the script you provided and I'm getting somewhere
you might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with s3.
See if the script above works once you do that
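For instance, a sketch of what that could look like; the host and bucket are placeholders, and the StorageHelper import path may differ between SDK versions:

```python
from clearml import Task
from clearml.storage.helper import StorageHelper  # internal helper, used here only for testing

# note the "s3." prefix on the host, matching the host entry in clearml.conf (e.g. host: "s3.my_host:443")
task = Task.init(project_name="test4", task_name="test4",
                 output_uri="s3://s3.my_host:443/my-bucket")
helper = StorageHelper.get("s3://s3.my_host:443/my-bucket")
```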