Answered
Why Is Async_Delete Not Working?

Why is async_delete not working?

  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    [screenshots attached]
  
  
Posted one year ago

Answers 80


Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂

  
  
Posted one year ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go (output_uri), and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.
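
For illustration, the two pieces would look roughly like this (a sketch only: the host, bucket, and key values are placeholders, not the actual values from this thread, and the explicit port follows the note further down the thread about non-AWS endpoints):

# clearml.conf (SDK side) holds the S3 credentials:
sdk.aws.s3 {
    credentials: [
        {
            host: "our-host.com:443"   # placeholder endpoint, explicit port
            key: "ACCESS_KEY"          # placeholder
            secret: "SECRET_KEY"       # placeholder
            bucket: "rnd-dev"
            secure: true               # https
        }
    ]
}

# test_task.py: output_uri decides WHERE the data goes:
import clearml

task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:443/rnd-dev",  # placeholder bucket URI
)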

  
  
Posted one year ago

removing it doesn't fix the problem

  
  
Posted one year ago

As I wrote, you need to remove the s3 from the start of the host section.
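
i.e. a sketch of the change, using the host that appears in the config later in this thread:

# before (fails):
host: "s3.somehost.com"
# after, without the "s3." prefix:
host: "somehost.com"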

  
  
Posted one year ago

[screenshot attached]

  
  
Posted 12 months ago

Yes, the credentials seem to work
Now I'm trying to figure out why I don't see the uploaded files/folders

  • I checked whether the ClearML task uses the fileserver instead, but I don't see any files in the fileserver folder
  • Nothing is uploaded to the bucket (I will ask the IT guy to check the logs to see if I'm uploading any files)
    [screenshot attached]
  
  
Posted one year ago

Or whatever port you use

  
  
Posted one year ago

It looks like I'm moving forward

Setting the URL in clearml.conf without "s3" as suggested works (but I don't add a port there, not sure if that breaks something; we don't have a port)
host: "our-host.com"

Then in test_task.py
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created
What I'm getting now is a bucket error, I suppose I have to specify it somewhere?
[screenshot attached]

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> Hello John, we are still unable to use ClearML with our self-hosted S3 CEPH instances, is there any update on the hotfix for 1.14?

  
  
Posted one year ago

@<1523701435869433856:profile|SmugDolphin23> Setting it without http is not possible, as it auto-fills it back in

  
  
Posted 11 months ago

Good morning, I tried the script you provided and I'm getting somewhere
[screenshot attached]

  
  
Posted one year ago

Can you add your full configurations again?

  
  
Posted one year ago

Hey, I see that 1.14.2 dropped
I tried it but the issue is still there, maybe the hotfix is in the next patch?

Here is the setup so you can reproduce it (we don't have a region field)
clearml.conf:
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXXXXXXXXXXXXXXXXXX"
            secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            bucket: "rnd-dev"
        },
    ]
}

test.py

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )

  
  
Posted one year ago

@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding minio in the documentation - None

  
  
Posted one year ago

unable to see the images with that link though

  
  
Posted 12 months ago

Can you actually add the bucket to the credentials just to try it out?
Also, can you check that this snippet works for you (with your creds):

import boto3
import json
import six

# Fill in your actual credentials and endpoint:
key = ""
secret = ""
host = "our_host.com"
bucket_name = "bucket"
profile = None
filename = "test"
data = {"test": "data"}

# Create a session with explicit credentials and point the S3
# resource at the custom endpoint, then upload a small JSON object:
boto_session = boto3.Session(aws_access_key_id=key, aws_secret_access_key=secret, profile_name=profile)
endpoint = "https://" + host
boto_resource = boto_session.resource("s3", region_name=None, endpoint_url=endpoint)
bucket = boto_resource.Bucket(bucket_name)
bucket.put_object(Key=filename, Body=six.b(json.dumps(data)))
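
If put_object succeeds, a short read-back (a sketch, reusing the session and names from the snippet above) confirms the object actually landed in the bucket:

# optional verification: fetch the object we just wrote and print its body
obj = bucket.Object(filename).get()
print(obj["Body"].read())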
  
  
Posted one year ago

also, if I try to set the url to None it auto replaces it with None : None

  
  
Posted 11 months ago

We don't need a port
"s3" is part of the URL that is configured on our routers; without it we cannot connect

  
  
Posted one year ago

maybe someone on your end can try to parse such a config and see if they also have the same problem
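
For what it's worth, such a block can be parsed standalone with the pyhocon package (which, as far as I can tell, is what ClearML uses under the hood for clearml.conf; this may not match ClearML's full parsing path). A sketch with dummy key/secret:

from pyhocon import ConfigFactory

conf_text = """
s3 {
    use_credentials_chain: false
    credentials: [
        {
            host: "s3.somehost.com"
            key: "XXXX"
            secret: "XXXX"
            bucket: "rnd-dev"
        }
    ]
}
"""

# Parse the block and dump the credential entries it produced:
config = ConfigFactory.parse_string(conf_text)
for cred in config["s3"]["credentials"]:
    print(cred["host"], cred["bucket"])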

  
  
Posted one year ago

clearml.conf is a fresh one, I did clearml-init to make sure

  
  
Posted one year ago

just append it to None : None in Task.init

  
  
Posted one year ago

digging deeper it seems like a parsing issue
[screenshot attached]

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36> Hello, I'm still unable to understand how to fix this

  
  
Posted 11 months ago

Hi, OK, I'm really close to a working system now
The debug image is uploading to S3, I'm seeing the files, all OK there

The problem now is viewing these images in the web UI
Going to the Debug Samples panel in the Task drops me a popup to fill in S3 credentials

I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
[screenshot attached]

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , any non-AWS S3-like storage must have a port in this setup, how did you configure the SDK?
Also the two ways you're showing are the same - the popup will fill in the details in the settings page

  
  
Posted 11 months ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.

  
  
Posted one year ago

@<1590514584836378624:profile|AmiableSeaturtle81> weren't you using https for the s3 host? maybe the issue has something to do with that?

  
  
Posted one year ago

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80 / 443).
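
A sketch of what that could look like (host and bucket are placeholders; the secure flag and the port-in-host form follow the ClearML docs for non-AWS S3 endpoints):

# clearml.conf credentials entry, endpoint with an explicit port:
{
    host: "our-host.com:443"   # port required even for the default 443
    secure: true               # use https
    key: "ACCESS_KEY"
    secret: "SECRET_KEY"
    bucket: "rnd-dev"
}

# and the matching output_uri carries the port as well:
output_uri = "s3://our-host.com:443/rnd-dev"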

  
  
Posted 11 months ago

@<1523701435869433856:profile|SmugDolphin23> Any news?

  
  
Posted 11 months ago

But there are still some weird issues, I cannot see the files uploaded in the bucket

  
  
Posted one year ago