Why is async_delete not working?


  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    image
    image
  
  
Posted one year ago

Answers 80


Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be
host: "our-host.com:<PORT>"
and NOT
host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the "s3." prefix, as it is reserved for AWS S3 only.
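For illustration, a minimal clearml.conf sketch of that advice; the host, port, bucket, and keys below are placeholders, not confirmed values for this setup:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # plain host:port -- no "s3." prefix, that is reserved for AWS S3
                    host: "our-host.com:9000"
                    bucket: "my-bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    secure: true
                }
            ]
        }
    }
}
```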

  
  
Posted one year ago

Meaning that you should configure your host as follows: host: "somehost.com:9000"

  
  
Posted one year ago

We do, yes. Changing it to https in the settings doesn't help.

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> Still unable to understand what I'm doing wrong.
We have a self-hosted S3 Ceph storage server.
Setting my config like this breaks task.init
image

  
  
Posted one year ago

In which UI? Because there are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL).
Our s3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and upload works)
image
image
image

  
  
Posted one year ago

Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose), on the user's PC, or both?
It's self-hosted S3, that's all I know; I don't think it's MinIO.

  
  
Posted one year ago

in the code, the output uri should be with None :<PORT>

  
  
Posted one year ago

@<1590514584836378624:profile|AmiableSeaturtle81> OK, I think your credentials from clearml.conf are actually working now; let's not change them.
Now let's try this simple code:

```python
from clearml import Task
import numpy as np


if __name__ == "__main__":
    task = Task.init(task_name="test4", project_name="test4", output_uri="
")
    image = np.random.randint(0, 256, size=(500, 1000, 3), dtype=np.uint8)
    task.upload_artifact("image", image)
```

You should change the task_name and project_name from "test4", just in case some object was created previously

  
  
Posted one year ago

There is a typo in the clearml.conf I sent you, on line 87: there should be "key", not "ey". I'm aware of it.

  
  
Posted one year ago

The problem is that the clearml.conf s3 config doesn't support an empty region field; even an empty string crashes it.
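If an empty string is rejected, one workaround sketch is to drop the key from the credentials entry entirely rather than leaving it blank (values below are placeholders):

```
{
    host: "our-host.com:9000"   # placeholder endpoint
    key: "ACCESS_KEY"
    secret: "SECRET_KEY"
    secure: true
    # no "region" line at all -- omitted instead of region: ""
}
```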

  
  
Posted one year ago

@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding.
It was unclear to me that I also need to set 443 everywhere in clearml.conf.
Setting the s3 host URLs with 443 in clearml.conf and also in the web UI made it work.

I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get errors about missing S3 keys in the clearml docker compose logs, and the folders/files are not deleted from the S3 bucket.

You can see how storage_credentials.conf looks for me (first image). It is the same as the client clearml.conf (with the port, as you suggested).

I have storage_credentials.conf mounted inside async_delete as a volume.
I have also confirmed that the mount works and that storage_credentials.conf is inside the async_delete container's config folder.
Maybe I'm misconfiguring something?
image
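For reference, a docker-compose sketch of the kind of mount described above; the service name matches async_delete, but the exact host path and container config directory are assumptions about this deployment, not confirmed values:

```yaml
# docker-compose override -- sketch only, paths are assumptions
services:
  async_delete:
    volumes:
      # mount the host-side credentials file into the container's config dir
      - /opt/clearml/config/storage_credentials.conf:/opt/clearml/config/storage_credentials.conf
```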

  
  
Posted one year ago

  • Here is how the client-side clearml.conf looks, together with the script I'm using to create the tasks. Uploads seem to work and are fixed thanks to you guys 🙌
    image
    image
    image
  
  
Posted one year ago

We use a Ceph Storage Cluster; its interface is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the s3 config I posted earlier.

  
  
Posted one year ago

You might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with "s3." and see if the script above works when you do that.

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the web UI

  
  
Posted one year ago

I tried it with the port, but I'm still having the same issue.
Tried it with/without secure and multipart.
image
image
image

  
  
Posted one year ago

This is the link that gets generated
image

  
  
Posted one year ago

  1. This is how the web UI configuration looks
    image
  
  
Posted one year ago

Bump, still waiting; we're closing in on a month of being unable to deploy. We have a team of 10+ people.

  
  
Posted one year ago

I don't have a region. I guess I will wait till tomorrow then?

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to set up your s3 key/secret in clearml.conf
I suggest following this documentation - None

  
  
Posted one year ago

@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding minio in the documentation - None

  
  
Posted one year ago

@<1523701435869433856:profile|SmugDolphin23> Any news?

  
  
Posted one year ago

Can you add your full configurations again?

  
  
Posted one year ago

just append it to None : None in Task.init

  
  
Posted one year ago

Yes, the credentials seem to work.
I'm now trying to figure out why I don't see the uploaded files/folders.

  • I checked whether the clearml task uses the fileserver instead, but I don't see any files in the fileserver folder
  • Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see whether I'm uploading any files)
    image
  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.

  
  
Posted one year ago

We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this

  
  
Posted one year ago