Answered
Why Is Async_Delete Not Working?

Why is async_delete not working?

  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    image
    image
  
  
Posted one year ago

Answers 80


Also, is it AWS S3 or some similar storage solution like MinIO?

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> I'm still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server.
Setting my config like this breaks Task.init.
image

  
  
Posted one year ago

I don't have a region. I guess I will wait till tomorrow then?

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂

  
  
Posted one year ago

@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set 443 everywhere in clearml.conf.
Setting the S3 host URLs with 443 in clearml.conf and also in the web UI made it work.

I'm now almost at the finish line. The last thing that would be great to fix is archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folder/files are not deleted in the S3 bucket.

You can see what storage_credentials.conf looks like for me (first image). It is the same as the client clearml.conf (with the port, as you suggested).

I have storage_credentials.conf mounted inside the async_delete container as a volume.
I have also confirmed that the mount works and that storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
image

  
  
Posted one year ago

Specifying it like this gets me a different error:

Exception has occurred: ValueError

Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.

During handling of the above exception, another exception occurred:

File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
    task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
image

  
  
Posted one year ago

Setting these URLs in SETTINGS / Configuration / WEB APP CLOUD ACCESS in the web UI:
None doesn't work
None doesn't work
None doesn't work
None doesn't work
None gets replaced with None ://s3.host-our.com:8080, doesn't work
None doesn't work
None doesn't work

In all of these instances the S3 CREDENTIALS popup never disappears; it still pops up asking for credentials no matter how I try to set them.
image

  
  
Posted one year ago

@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?

  
  
Posted one year ago

  • Here is what the client-side clearml.conf looks like, together with the script I'm using to create the tasks. Uploads seem to work and are fixed, thanks to you guys 🙌
    image
    image
    image
  
  
Posted one year ago

I know these keys work; the URL and everything else works because I use these credentials daily.

  
  
Posted one year ago

Digging deeper, it seems like a parsing issue.
image

  
  
Posted one year ago

Good morning, I tried the script you provided and I'm getting somewhere.
image

  
  
Posted one year ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go - output_uri - and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
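To illustrate the split being described, a minimal sketch (the bucket and host below are placeholders based on values mentioned elsewhere in this thread, not a confirmed working setup):

import clearml

# WHERE the data goes: output_uri on Task.init points at the target bucket,
# e.g. an s3://host:port/bucket style URI for non-AWS storage (assumption).
task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:443/rnd-dev",
)

# HOW it is accessed: the key/secret for that host are not passed here at all;
# they come from the sdk.aws.s3.credentials section of ~/clearml.conf.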

  
  
Posted one year ago

Bump, still waiting; it's closing in on a month that we have been unable to deploy. We have a team of 10+ people.

  
  
Posted one year ago

Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be
host: "our-host.com:<PORT>"
And NOT
host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host setting you need to remove the "s3", as this is reserved only for AWS S3.
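For illustration only, the two host formats being contrasted here would look like this in a credentials entry (the port number is a placeholder; whether one is needed depends on the setup):

host: "our-host.com:9000"      # host plus explicit port, no "s3." prefix
host: "s3.our-host.com"        # the form this advice says to avoid for non-AWS storage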

  
  
Posted one year ago

There is a typo in the clearml.conf I sent you, on line 87: it should be "key", not "ey". I'm aware of it.

  
  
Posted one year ago

I can't get the conf credentials to work.
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ', check configuration file ~/clearml.conf
image

  
  
Posted one year ago

But there are still some weird issues; I cannot see the uploaded files in the bucket.

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix and can't finish the ClearML setup.

  
  
Posted one year ago

just append it to None : None in Task.init

  
  
Posted one year ago

Will it be appended in ClearML?
"s3" is part of the host's domain.

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your S3 key/secret in clearml.conf.
I suggest following this documentation - None
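As a rough sketch of what that clearml.conf section generally looks like for a self-hosted S3-compatible store (all values below are placeholders, not the poster's real settings; whether a port is needed depends on the setup):

sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "our-host.com:443"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    bucket: "rnd-dev"
                }
            ]
        }
    }
}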

  
  
Posted one year ago

Hi, OK, I'm really close now to a working system.
Debug images are uploading to S3 and I'm seeing the files, all OK there.

The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in a task pops up a dialog asking me to fill in S3 credentials.

I can't figure out the right setup for the credentials to work.
This is what I have now (note that we don't have a region).
image

  
  
Posted one year ago

The problem is that the clearml.conf S3 config doesn't support an empty region field; even an empty string crashes it.
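In other words (a sketch of the behaviour being reported, not a confirmed workaround):

region: ""      # reported to crash the config parsing
# leaving the region key out entirely, as in the config posted below, is the only option we have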

  
  
Posted one year ago

Hey, I see that 1.14.2 dropped.
I tried it but the issue is still there; maybe the hotfix is in the next patch?

Here is the setup so you can reproduce it (we don't have a region field):
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}

test.py

import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.
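For reference, a minimal sketch of the kind of boto3 check being asked about, assuming an endpoint like the one discussed in this thread (the endpoint URL, bucket and object key are placeholders):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.our-host.com:443",   # assumed self-hosted endpoint; "s3." is part of the domain
    aws_access_key_id="XXXXXXXXXXXXXXXXXXXX",
    aws_secret_access_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    # no region_name is passed, matching the "we don't have a region" setup
)

# round-trip check: upload, list, then delete a small object
s3.put_object(Bucket="rnd-dev", Key="clearml-connectivity-check.txt", Body=b"ok")
print(s3.list_objects_v2(Bucket="rnd-dev", MaxKeys=5).get("KeyCount"))
s3.delete_object(Bucket="rnd-dev", Key="clearml-connectivity-check.txt")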

  
  
Posted one year ago

So from our IT guys I now know that:
the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and never had any problems.
This is where the crash happens inside the ClearML Task:
image

  
  
Posted one year ago

We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , any non-AWS S3-like storage must have a port in this setup. How did you configure the SDK?
Also, the two ways you're showing are the same - the popup will fill in the details on the settings page.

  
  
Posted one year ago

By the way @<1590514584836378624:profile|AmiableSeaturtle81> , can you try to specify the host without the http* prefix and set the port to 443? Like s3.my_host:443 (or even without the port).

  
  
Posted one year ago