Answered
Why Is Async_Delete Not Working?


  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    image
    image
  
  
Posted 11 months ago

Answers 80


Removing it doesn't fix the problem.

  
  
Posted 10 months ago

Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be
host: "our-host.com:<PORT>"
and NOT
host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host setting you need to remove the "s3", as this is reserved only for AWS S3.
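
A minimal sketch of what the corresponding credentials section in clearml.conf might look like, assuming a MinIO-style endpoint; the host, port, bucket, and keys here are hypothetical placeholders:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # endpoint host WITHOUT an "s3." prefix; include the port your S3 service listens on
                    host: "our-host.com:9000"
                    bucket: "bucket"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}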

  
  
Posted 10 months ago

image

  
  
Posted 10 months ago

This is the link that gets generated:
image

  
  
Posted 10 months ago

As I wrote, you need to remove the "s3" from the start of the host value.

  
  
Posted 10 months ago

Bump, still waiting. It's been close to a month that we've been unable to deploy. We have a team of 10+ people.

  
  
Posted 10 months ago

I tried it with the port, but I'm still having the same issue.
I tried it with and without secure and multipart.
image
image
image

  
  
Posted 10 months ago

OK, slight update: it seems like artifacts are uploading to the bucket now. Maybe my file explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.

Here is the script I'm using to test things. Thanks!
image
image

  
  
Posted 10 months ago

Can you add your full configurations again?

  
  
Posted 10 months ago

Adding the bucket in clearml.conf causes the same error: clearml.storage - ERROR - Failed uploading: Could not connect to the endpoint URL: " None "
image
image
image

  
  
Posted 10 months ago

host: "my-minio-host:9000"
  
  
Posted 10 months ago

We don't need a port.
The "s3" is part of the URL that is configured on our routers; without it we cannot connect.

  
  
Posted 10 months ago

We do, yes. Changing it to https in the settings doesn't help.

  
  
Posted 10 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 Python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.

  
  
Posted 10 months ago

In the code, the output_uri should be None:<PORT>
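
A sketch of what that Task.init call could look like, assuming the redacted URL is an s3:// address carrying the custom endpoint; the host, port, and bucket are hypothetical:

from clearml import Task

task = Task.init(
    project_name="project",
    task_name="task",
    # for a non-AWS S3 endpoint the URI includes host and port
    output_uri="s3://our-host.com:9000/bucket",
)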

  
  
Posted 10 months ago

So from our IT guys I now know that:
The "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and never had any problems.
This is where the crash happens inside the ClearML Task:
image

  
  
Posted 10 months ago

Specifying it like this gets me a different error:

Exception has occurred: ValueError
Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.

During handling of the above exception, another exception occurred:

File "/home/ma/src/clearml-server/task_test.py", line 10, in <module>
    task: clearml.Task = clearml.Task.init(
ValueError: Insufficient permissions (delete failed) for None
image

  
  
Posted 11 months ago

Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):

import boto3
import json
import six

key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

# explicit credentials, no region, custom endpoint URL
boto_session = boto3.Session(aws_access_key_id=key, aws_secret_access_key=secret, profile_name=profile)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)
bucket = boto_resource.Bucket(bucket_name)
# upload a small JSON payload to verify write access to the bucket
bucket.put_object(Key=filename, Body=six.b(json.dumps(data)))
  
  
Posted 10 months ago

It looks like the problem is the host field; whenever I add it I get:
2024-01-22 13:27:16,489 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )

  
  
Posted 11 months ago

@<1523701435869433856:profile|SmugDolphin23> Any news?

  
  
Posted 9 months ago

The file is written:
image
image

  
  
Posted 10 months ago

It looks like I'm moving forward.

Setting the URL in clearml.conf without "s3" as suggested works (but I don't add a port there; not sure if that breaks something, we don't have a port):
host: "our-host.com"

Then in test_task.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created.
What I'm getting now is a bucket error; I suppose I have to specify it somewhere?
image

  
  
Posted 10 months ago

host: "my-minio-host:9000"

The port should be whatever port is used by your S3 solution.

  
  
Posted 10 months ago

Hi, OK, I'm really close to a working system now.
Debug images are uploading to S3; I'm seeing the files, all OK there.

The problem now is viewing these images in the web UI.
Going to the Debug Samples panel in the Task pops up a dialog asking me to fill in S3 credentials.

I can't figure out what the right setup is for the creds to work.
This is what I have now (note that we don't have a region):
image

  
  
Posted 10 months ago

What about this script? (Replace with your creds; comment out the creds in clearml.conf for now.)

from clearml import Task
from clearml.storage.helper import StorageHelper

task = Task.init("test", "test")
# register the S3 credentials for this task's uploads
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",
    secret="",
    profile=None,
    secure=True
)
helper = StorageHelper.get("")
  
  
Posted 10 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the web UI.

  
  
Posted 9 months ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.

  
  
Posted 10 months ago

@<1590514584836378624:profile|AmiableSeaturtle81> if you wish for your debug samples to be uploaded to S3, you have 2 options: you either use this function: None
or you can change the api.files_server entry to your S3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf, you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://..." every time in Task.init. A sketch of the first option follows below.
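
A sketch of the first option, assuming the redacted function is Logger.set_default_upload_destination; the s3:// address is a hypothetical placeholder:

from clearml import Logger, Task

task = Task.init(project_name="project", task_name="task")
# send all reported debug samples to S3 instead of the fileserver (hypothetical endpoint)
Logger.current_logger().set_default_upload_destination("s3://our-host.com:9000/bucket")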

  
  
Posted 10 months ago

Also, is it AWS S3, or is it some similar storage solution like MinIO?

  
  
Posted 11 months ago

I do have write permissions

  
  
Posted 10 months ago