Answered
Why Is Async_Delete Not Working?

Why is async_delete not working?

  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    (screenshots attached)
  
  
Posted 10 months ago

Answers 80


We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.

  
  
Posted 9 months ago

I can't get the conf credentials to work.
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
(screenshot attached)

  
  
Posted 10 months ago

This is an actual AWS S3 bucket?

  
  
Posted 9 months ago

The file is written.
(screenshots attached)

  
  
Posted 9 months ago

Meaning that you should configure your host as follows: host: "somehost.com:9000"
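
For illustration, a minimal sketch of how that host entry could sit inside clearml.conf, assuming the standard sdk.aws.s3.credentials layout; the host, key and secret values are placeholders, not values from this thread:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # non-AWS endpoint, addressed as host:port
                    host: "somehost.com:9000"
                    key: "placeholder-access-key"
                    secret: "placeholder-secret-key"
                    multipart: false
                    secure: true   # set to false if the endpoint is plain HTTP
                }
            ]
        }
    }
}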

  
  
Posted 9 months ago

Can you add your full configurations again?

  
  
Posted 9 months ago

In the code, the output URI should be with None :<PORT>
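
A hedged Python sketch of what that could look like; the actual URL is redacted ("None") in this thread, so the endpoint, port and bucket below are placeholders:

import clearml

# Placeholder endpoint and bucket; the real values are redacted in this thread.
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://somehost.com:9000/placeholder-bucket",
)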

  
  
Posted 9 months ago

Unable to see the images with that link, though.

  
  
Posted 9 months ago

@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.

  
  
Posted 8 months ago

@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix and can't finish the ClearML setup.

  
  
Posted 10 months ago

just append it to None : None in Task.init

  
  
Posted 9 months ago

Hi, OK, I'm really close to a working system now.
The debug image is uploading to S3, I'm seeing the files, all OK there.

The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in the Task drops me a popup to fill in S3 credentials.

I can't figure out what the right setup is for the creds to work.
This is what I have now (note that we don't have a region):
(screenshot attached)

  
  
Posted 9 months ago

In which UI? Because there are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL).
Our S3 host doesn't have a port (I didn't specify a port in clearml.conf anywhere and upload works).
(screenshots attached)

  
  
Posted 8 months ago

Yes, the credentials seem to work.
I'm trying to figure out now why I don't see the uploaded files/folders.

  • I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
  • Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see whether I'm uploading any files)
    (screenshot attached)
  
  
Posted 9 months ago

I tried it with a port, but I'm still having the same issue.
Tried it with/without secure and multipart.
(screenshots attached)

  
  
Posted 9 months ago

Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose) or on the user's PC? Or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.

  
  
Posted 10 months ago

But there are still some weird issues; I cannot see the files uploaded to the bucket.

  
  
Posted 9 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your S3 key/secret in clearml.conf.
I suggest following this documentation - None
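
For reference, a minimal sketch of the key/secret section being referred to, assuming the usual sdk.aws.s3 block in clearml.conf (values are placeholders):

sdk {
    aws {
        s3 {
            # default credentials, used unless a more specific credentials entry matches
            key: "placeholder-access-key"
            secret: "placeholder-secret-key"
        }
    }
}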

  
  
Posted 10 months ago

@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server.
Setting my config like this breaks task.init:
(screenshot attached)

  
  
Posted 10 months ago

It looks like I'm moving forward.

Setting the URL in clearml.conf without "s3" as suggested works (but I don't add a port there; not sure if it breaks something, we don't have a port):
host: " our-host.com "

Then in test_task.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created.
What I'm getting now is a bucket error, I suppose I have to specify it somewhere?
(screenshot attached)

  
  
Posted 9 months ago

It looks like the problem is the host field; whenever I add it I get:
2024-01-22 13:27:16,489 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )

  
  
Posted 10 months ago

py file:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: " our-host.com "
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
(screenshot attached)

  
  
Posted 9 months ago

Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
(screenshots attached)
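
Presumably "adding the bucket" means something along these lines inside a credentials entry; this is a hedged sketch with placeholder values, not the poster's actual configuration:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "somehost.com:9000"
                    bucket: "placeholder-bucket"
                    key: "placeholder-access-key"
                    secret: "placeholder-secret-key"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}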

  
  
Posted 9 months ago

I don't have a region. I guess I will wait till tomorrow then?

  
  
Posted 10 months ago

Also, when uploading artifacts I see where they are stored on the S3 bucket, but I can't find where the debug images are stored.

  
  
Posted 10 months ago

Also, is it an AWS S3 or is it some similar storage solution like MinIO?

  
  
Posted 10 months ago

The problem is that the clearml.conf S3 config doesn't support an empty region field; even an empty string crashes it.
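
A possible workaround sketch, assuming the region key can simply be left out rather than set to an empty string (values are placeholders):

sdk {
    aws {
        s3 {
            key: "placeholder-access-key"
            secret: "placeholder-secret-key"
            # no region line at all; an empty region string reportedly causes the crash described above
        }
    }
}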

  
  
Posted 9 months ago

2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object Reason: Missing key and secret for S3 storage access ( )

This looks unrelated to the hotfix; it looks like you misconfigured something and are therefore failing to write to S3.

  
  
Posted 9 months ago

  1. This is how the web UI configuration looks:
    (screenshot attached)
  
  
Posted 8 months ago

@<1523701070390366208:profile|CostlyOstrich36> Hello John, we are still unable to use ClearML with our self-hosted S3 Ceph instances. Is there any update on the hotfix for 1.14?

  
  
Posted 9 months ago