Answered
Why Is Async_Delete Not Working?

Why is async_delete not working?

  • The bucket is not right in the logs
  • This is really misleading in the web UI, because it says "success" although async_delete failed miserably.
  • I'm using the latest versions
  • Self-hosted ClearML, self-hosted S3
    image
    image
  
  
Posted one year ago

Answers 80


Also, is it AWS S3 or some similar storage solution like MinIO?

  
  
Posted one year ago

Also, if I try to set the url to None it auto-replaces it with None : None

  
  
Posted one year ago

CostlyOstrich36 Hello, I'm still unable to understand how to fix this

  
  
Posted one year ago

Hi AmiableSeaturtle81, a self-hosted S3 service must specify the protocol (http/https) and port, even for the default ones (80 / 443).

  
  
Posted one year ago

AmiableSeaturtle81 if you wish for your debug samples to be uploaded to S3 you have 2 options: you either use this function: None
or you can change the api.files_server entry to your S3 bucket in clearml.conf. This way you wouldn't need to call set_default_upload_destination every time you run a new script.
Also, in clearml.conf, you can change sdk.development.default_output_uri so that you don't need to set output_uri="s3://... every time in Task.init
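
For reference, a minimal clearml.conf sketch along those lines (the endpoint my-minio.example.com:9000 and the bucket name clearml are placeholders, not values from this thread):

api {
    # uploads that would normally go to the ClearML fileserver go here instead
    files_server: "s3://my-minio.example.com:9000/clearml"
}
sdk {
    development {
        # default destination for artifacts/models, so output_uri in Task.init becomes optional
        default_output_uri: "s3://my-minio.example.com:9000/clearml"
    }
}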

  
  
Posted one year ago

But there are still some weird issues, I cannot see the files uploaded in the bucket

  
  
Posted one year ago

  • Here is how the client-side clearml.conf looks, together with the script I'm using to create the tasks. Uploads seem to work and are fixed thanks to you guys 🙌
    image
    image
    image
  
  
Posted one year ago

SmugDolphin23 Any news?

  
  
Posted one year ago

Hi AmiableSeaturtle81, any non-AWS S3-like storage must have a port in this setup. How did you configure the SDK?
Also, the two ways you're showing are the same - the popup will fill in the details in the settings page

  
  
Posted one year ago

Hi AmiableSeaturtle81, you need to add the port to the credentials when you input them in the web UI

  
  
Posted one year ago

btw AmiableSeaturtle81, can you try to specify the host without http* and set the port to 443? Like s3.my_host:443 (or even without the port)

  
  
Posted one year ago

This is the link generated
image

  
  
Posted one year ago

ok, slight update. It seems like artifacts are uploading to the bucket now. Maybe my file explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3.

Here is the script I'm using to test things. Thanks
image
image
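
One thing worth checking here: reported (debug) images follow the logger's upload destination rather than output_uri. A hedged sketch, assuming the function referenced earlier in the thread is Logger.set_default_upload_destination and using a placeholder endpoint and bucket:

from clearml import Task
import numpy as np

task = Task.init(project_name="project", task_name="debug-sample-test",
                 output_uri="s3://my-s3-host:9000/my-bucket")  # placeholder endpoint and bucket
logger = task.get_logger()
# send reported images (debug samples) to the bucket instead of the ClearML fileserver
logger.set_default_upload_destination("s3://my-s3-host:9000/my-bucket")
img = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
logger.report_image(title="test", series="random", iteration=0, image=img)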

  
  
Posted one year ago

we do, yes. Changing it to https in the settings doesn't help

  
  
Posted one year ago

py file:
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: "our-host.com"
    key: "xxx"
    secret: "xxx"
    multipart: false
    secure: true
}
image
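
For context, that block normally sits inside the sdk.aws.s3.credentials list in clearml.conf; a hedged sketch of the full nesting (the 443 port is an assumption about this setup, not something confirmed in the thread):

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # non-AWS endpoint: hostname plus port, no scheme and no "s3." prefix
                    host: "our-host.com:443"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}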

  
  
Posted one year ago

The problem is that the clearml.conf s3 config doesn't support an empty region field, even empty strings crash it

  
  
Posted one year ago

It looks like I'm moving forward

Setting the url in clearml.conf without "s3" as suggested works (but I don't add a port there, not sure if it breaks something, we don't have a port)
host: "our-host.com"

Then in test_task.py
task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)

I think the connection is created
What I'm getting now is a bucket error, I suppose I have to specify it somewhere?
image
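
If the error is about a missing bucket, the bucket usually goes into the destination URI itself; a hedged sketch (host, port and bucket name below are placeholder values for this setup):

import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    # non-AWS S3 destination: s3://<host>:<port>/<bucket>
    output_uri="s3://our-host.com:443/my-bucket",
)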

  
  
Posted one year ago

good morning, I tried the script you provided and I'm getting somewhere
image

  
  
Posted one year ago

SmugDolphin23 Setting it without http is not possible as it auto-fills them back in

  
  
Posted one year ago

Hi, ok I'm really close to a working system now
The debug image is uploading to S3, I'm seeing the files, all ok there

The problem now is viewing these images in the web UI
Going to the Debug Samples panel in the Task shows a popup to fill in S3 credentials

I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
image

  
  
Posted one year ago

In the code, the output_uri should be with None :<PORT>

  
  
Posted one year ago

Again, I'm telling you, please look at the documentation and what it says specifically about MinIO-like solutions.
The host should be
host: "our-host.com:<PORT>"
And NOT
host: "s3.our-host.com"
Maybe you don't require a port, I don't know your setup, but as I said, in the host settings you need to remove the s3 as this is reserved only for AWS S3.
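
To connect this with the output_uri side, a hedged sketch of a matching pair (the <PORT> and <BUCKET> values are placeholders):

# clearml.conf credentials entry
host: "our-host.com:<PORT>"

# script
output_uri="s3://our-host.com:<PORT>/<BUCKET>"

The idea being that the host in clearml.conf and the host:port part of output_uri should match, so the SDK can find the right credentials for that destination.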

  
  
Posted one year ago

I can't get the conf credentials to work
Specifying it like this gives me:
Exception has occurred: ValueError
Could not get access credentials for ' None ' , check configuration file ~/clearml.conf
image

  
  
Posted one year ago

image

  
  
Posted one year ago

This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go - output_uri - and the clearml.conf that you need to set up with credentials. I am telling you, you are setting it wrong. Please follow the documentation.

  
  
Posted one year ago

There is a typo in the clearml.conf I sent you: on line 87 there should be "key", not "ey". I'm aware of it

  
  
Posted one year ago

File is written
image
image

  
  
Posted one year ago

AmiableSeaturtle81 ok, I think that your credentials from clearml.conf are actually working now. Let's not change them.
Now let's try this simple code:

from clearml import Task
import numpy as np


if __name__ == "__main__":
    task = Task.init(task_name="test4", project_name="test4", output_uri=" None ")
    image = np.random.randint(0, 256, size=(500, 1000, 3), dtype=np.uint8)
    task.upload_artifact("image", image)

You should change the task_name and project_name from test just in case some object has been created previously

  
  
Posted one year ago

As I wrote, you need to remove the s3 from the start of the host section.

  
  
Posted one year ago

Or whatever port you use

  
  
Posted one year ago