Why Is Async_Delete Not Working?


  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    [screenshots]
  
  
Posted 4 months ago

Answers 80


We use a Ceph Storage Cluster; the interface to it is the same as S3.
I don't get what I have misconfigured.
The only thing I have not added is the "region" field in clearml.conf, because we literally don't have one; it's a self-hosted cluster.
You can try to replicate the s3 config I posted earlier.

  
  
Posted 3 months ago

I know these keys work; the URL and everything else works because I use these creds daily.

  
  
Posted 3 months ago

The problem is that the clearml.conf s3 config doesn't support an empty region field; even an empty string crashes it.
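
For reference, a minimal sketch of the config shape being described, with placeholder host and credentials; for a self-hosted endpoint with no region, the usual workaround is to leave the region key out entirely rather than set it to an empty string:

aws {
    s3 {
        # placeholder endpoint and creds for a self-hosted, S3-compatible cluster
        host: "our-host.com:9000"
        key: "xxx"
        secret: "xxx"
        secure: true
        # region: ""  <- reported to crash; omit the key instead of leaving it empty
    }
}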

  
  
Posted 3 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81>! We have someone investigating the UI issue (I mainly work on the SDK). They will get back to you once they find something...

  
  
Posted 3 months ago

So from our IT guys I now know that the "s3" part of the URL is a subdomain; we use it in all other libs like boto3 and cloudpathlib and have never had any problems.
This is where the crash happens inside the clearml Task:
[screenshot]

  
  
Posted 3 months ago

Will it be appended in clearml?
"s3" is part of the host's domain.

  
  
Posted 3 months ago

We don't need a port.
"s3" is part of the URL that is configured on our routers; without it we cannot connect.

  
  
Posted 3 months ago

In which UI? Because there are two ways to do it. When clicking on the artifact URL there is a popup (but it has no way to change the host URL).
Our s3 host doesn't have a port (I didn't specify a port anywhere in clearml.conf and upload works).
[screenshots]

  
  
Posted 2 months ago

There is a typo in the clearml.conf I sent you on line 87: it should be "key", not "ey". I'm aware of it.

  
  
Posted 3 months ago

@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this.

  
  
Posted 2 months ago

host: "my-minio-host:9000"
  
  
Posted 3 months ago

py file:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: "our-host.com"
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
[screenshot]

  
  
Posted 3 months ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
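
A minimal sketch of the two pieces together, with a hypothetical endpoint and bucket name:

# 1) WHERE the data goes: output_uri on Task.init
#    (endpoint and bucket below are placeholders)
from clearml import Task

task = Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:9000/my-bucket",
)

# 2) WHO may write there: credentials belong in clearml.conf, e.g.
#    aws { s3 { host: "our-host.com:9000", key: "xxx", secret: "xxx", secure: true } }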

  
  
Posted 3 months ago

The name is arbitrary; we might as well have "s5" there, but it is needed.

  
  
Posted 3 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI

  
  
Posted 2 months ago

@<1590514584836378624:profile|AmiableSeaturtle81> OK, I think that your credentials from clearml.conf are actually working now; let's not change them.
Now let's try this simple code:

from clearml import Task
import numpy as np


if __name__ == "__main__":
    task = Task.init(task_name="test4", project_name="test4", output_uri=" None ")
    image = np.random.randint(0, 256, size=(500, 1000, 3), dtype=np.uint8)
    task.upload_artifact("image", image)

You should change the task_name and project_name from "test4", just in case some object was created previously.

  
  
Posted 3 months ago

Good morning. I tried the script you provided and I'm getting somewhere.
[screenshot]

  
  
Posted 3 months ago

@<1590514584836378624:profile|AmiableSeaturtle81> Weren't you using https for the s3 host? Maybe the issue has something to do with that?

  
  
Posted 3 months ago

OK, slight update. It seems like artifacts are uploading to the bucket now; maybe my file explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.

Here is the script I'm using to test things. Thanks
[screenshots]
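
A sketch of the usual fix for this, assuming the SDK's Logger API: debug images reported through the logger follow the logger's upload destination rather than output_uri, so it has to be pointed at the bucket as well (the URI below is a placeholder):

from clearml import Task

task = Task.init(
    project_name="test",
    task_name="test-images",
    output_uri="s3://our-host.com:9000/my-bucket",  # placeholder destination
)
# reported (debug) images follow the logger's destination, not output_uri
task.get_logger().set_default_upload_destination("s3://our-host.com:9000/my-bucket")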

  
  
Posted 3 months ago

Unable to see the images with that link, though.

  
  
Posted 3 months ago

This is the link that was generated:
[screenshot]

  
  
Posted 3 months ago

Meaning that you should configure your host as follows: host: "somehost.com:9000"

  
  
Posted 3 months ago

I tried it with the port, but I'm still having the same issue.
Tried it with/without secure and multipart.
[screenshots]

  
  
Posted 3 months ago

It looks like I'm moving forward.

Setting the URL in clearml.conf without "s3" as suggested works (but I don't add the port there; not sure if that breaks something, we don't have a port):
host: "our-host.com"

Then in test_task.py:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created.
What I'm getting now is a bucket error; I suppose I have to specify it somewhere?
[screenshot]
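
A hedged sketch of where the bucket usually goes, assuming the per-bucket credentials list from the clearml.conf reference (all values are placeholders):

aws {
    s3 {
        credentials: [
            {
                # per-bucket entry; placeholders throughout
                bucket: "my-bucket"
                host: "our-host.com:9000"
                key: "xxx"
                secret: "xxx"
                secure: true
            }
        ]
    }
}

Alternatively, the bucket can ride along in the destination itself, e.g. output_uri="s3://our-host.com:9000/my-bucket".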

  
  
Posted 3 months ago

@<1523701435869433856:profile|SmugDolphin23> Any ideas how to fix this?

  
  
Posted 3 months ago

  1. This is how the web UI configuration looks:
     [screenshot]
  
  
Posted 2 months ago

Maybe someone on your end can try to parse such a config and see if they also have the same problem.

  
  
Posted 3 months ago

What about this script? (Replace with your creds; comment out the creds in clearml.conf for now.)

from clearml import Task
from clearml.storage.helper import StorageHelper

task = Task.init("test", "test")
task.setup_aws_upload(
    bucket="bucket1",
    host="localhost:9000",
    key="",
    secret="",
    profile=None,
    secure=True
)
helper = StorageHelper.get(" None ")
  
  
Posted 3 months ago

But there are still some weird issues; I cannot see the files uploaded in the bucket.

  
  
Posted 3 months ago

Can you actually add the bucket to the credentials, just to try it out?
Also, can you check that this snippet works for you (with your creds):

import boto3
import json
import six

key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

boto_session = boto3.Session(aws_access_key_id=key, aws_secret_access_key=secret, profile_name=profile)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=six.b(json.dumps(data)))
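
If the put succeeds, a quick read-back through the same resource (a sketch) verifies the object actually landed in the bucket:

# fetch the object back through the same endpoint to verify the upload
obj = boto_resource.Object(bucket_name, filename)
print(obj.get()["Body"].read())  # expected: b'{"test": "data"}'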
  
  
Posted 3 months ago