Answered
Why Is Async_Delete Not Working?

Why is async_delete not working?

  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    image
    image
  
  
Posted 9 months ago

Answers 80


I don't have a region. I guess I will wait till tomorrow then?

  
  
Posted 9 months ago

@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.

  
  
Posted 8 months ago

It looks like I'm moving forward.

Setting the url in clearml.conf without "s3" as suggested works (but I don't add a port there, not sure if it breaks something; we don't have a port)
host: "our-host.com"

Then in test_task.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created.
What I'm getting now is a bucket error; I suppose I have to specify it somewhere?
image

  
  
Posted 9 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.
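
For reference, a minimal boto3 check against a self-hosted, S3-compatible endpoint might look like the sketch below (the endpoint URL, bucket name, and credentials are placeholders, not values from this thread):

import boto3

# Placeholder endpoint and credentials - replace with your own values
s3 = boto3.client(
    "s3",
    endpoint_url="http://our-host.com:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List a few objects to confirm the endpoint and credentials actually work
response = s3.list_objects_v2(Bucket="my-bucket")
print(response.get("Contents", []))

If a plain boto3 call like this succeeds, the same endpoint and credentials should be what ends up in clearml.conf.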

  
  
Posted 9 months ago

removing it doesn't fix the problem

  
  
Posted 9 months ago

image

  
  
Posted 8 months ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
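
To illustrate the split: output_uri on the task controls where artifacts and models are uploaded, while clearml.conf supplies the credentials for that endpoint. A hedged sketch of the first half (host, port, and bucket name are placeholders):

import clearml

# output_uri points at the target bucket; the matching credentials live in clearml.conf
task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:9000/my-bucket",
)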

  
  
Posted 9 months ago

@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills them back in.

  
  
Posted 8 months ago

host: "my-minio-host:9000"

The port should be whatever port is used by your S3 solution.
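
For context, the clearml.conf section for a MinIO-like endpoint typically follows the shape below; treat it as a sketch based on the ClearML documentation, with the host, key, and secret as placeholders:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}

secure would be set to true if the endpoint is served over https.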

  
  
Posted 9 months ago

digging deeper, it seems like a parsing issue
image

  
  
Posted 9 months ago

@<1523701435869433856:profile|SmugDolphin23> Any news?

  
  
Posted 8 months ago

Yes, credentials seem to work.
Now I'm trying to figure out why I don't see the uploaded files / folders.

  • I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
  • Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see if I'm uploading any files)
    image
  
  
Posted 9 months ago

maybe someone on your end can try to parse such a config and see if they also have the same problem

  
  
Posted 9 months ago

clearml.conf is a fresh one; I did clearml-init to make sure.

  
  
Posted 9 months ago

  1. This is how the web UI configuration looks
    image
  
  
Posted 8 months ago

btw @<1590514584836378624:profile|AmiableSeaturtle81> , can you try to specify the host without http* and set the port to 443? Like s3.my_host:443 (or even without the port)

  
  
Posted 8 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , for a self-hosted S3 service you must specify the protocol (http/https) and port, even the default ones (80 / 443).

  
  
Posted 8 months ago

I do have write permissions

  
  
Posted 9 months ago

s

  
  
Posted 8 months ago

@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding MinIO in the documentation - None

  
  
Posted 9 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂

  
  
Posted 9 months ago

We use a Ceph Storage Cluster; the interface to it is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the S3 config I posted earlier.

  
  
Posted 9 months ago

Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be
host: "our-host.com:<PORT>"
And NOT
host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the "s3" as this is reserved only for AWS S3.

  
  
Posted 9 months ago

Also, is it AWS S3 or is it some similar storage solution like MinIO?

  
  
Posted 9 months ago

I can't get the conf credentials to work.
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ', check configuration file ~/clearml.conf
image

  
  
Posted 9 months ago

Do I need clearml.conf on my ClearML server (in the config folder which is mounted in docker-compose) or on the user's PC? Or both?
It's a self-hosted S3, that's all I know; I don't think it's MinIO.

  
  
Posted 9 months ago

Specifying it like this gets me a different error:

Exception has occurred: ValueError

Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.

During handling of the above exception, another exception occurred:

File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
    task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
image

  
  
Posted 9 months ago

@<1523701070390366208:profile|CostlyOstrich36> Any news on this? We are currently stuck without this fix, can't finish up the ClearML setup.

  
  
Posted 9 months ago

Hi, ok I'm really close to a working system now.
Debug images are uploading to S3, I'm seeing the files, all ok there.

The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in the Task drops me a popup to fill in S3 credentials.

I can't figure out what the right setup is for the creds to work.
This is what I have now (note that we don't have a region):
image

  
  
Posted 9 months ago

Also, when uploading artifacts, I see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
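
(For what it's worth, artifacts and models follow the task's output_uri, while debug samples are uploaded by the task's logger, which defaults to the fileserver. A hedged sketch of pointing the logger at the same bucket, assuming the standard clearml Logger API and a placeholder URI:)

# Redirect debug samples / images to the bucket instead of the fileserver
task.get_logger().set_default_upload_destination("s3://our-host.com:9000/my-bucket")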

  
  
Posted 9 months ago