Answered

I configured S3 storage in my clearml.conf file on a worker machine. Then I ran an experiment which produced a small artifact, but it doesn't appear in my cloud storage. What am I doing wrong? How do I make artifacts appear in my S3 storage?
Below is a sample of clearml.conf with the S3 configuration.

    s3 {
        key: "mykey"
        secret: "mysecret"
        region: "myendpoint"
        credentials: [
            # specifies key/secret credentials to use when handling s3 urls (read or write)
            {
                bucket: "mybucket"
                key: "mykey"
                secret: "mysecret"
            },
        ]
    }

  
  
Posted 2 years ago

Answers 41


I think that will work, but I'm not sure, actually. I know for sure that something like us-east-2 is supported

  
  
Posted 2 years ago

A bit overwhelmed by the configuration, since there's an agent, a server and a bunch of configuration files; it's easy to mess up

  
  
Posted 2 years ago

Can you share a snippet?

  
  
Posted 2 years ago

Check the output_uri parameter in Task.init
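Something like this, for example (the project/task names and bucket path here are just placeholders):

    from clearml import Task

    # send this task's artifacts and models to S3 instead of the ClearML fileserver
    task = Task.init(
        project_name="My Proj",
        task_name="Sample task",
        output_uri="s3://mybucket/artifacts",  # placeholder bucket/path
    )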

  
  
Posted 2 years ago

SuccessfulKoala55 Hey, Jake, getting back to you. I wasn't able to resolve my issue. I can access my bucket by any other means just fine, e.g. with the S3 CLI client. All the tools I use require 4 params: AK, SK, endpoint, bucket. I wonder why ClearML doesn't have an explicit endpoint parameter and you have to use output_uri for it, and why there is a region when other tools don't require it.

  
  
Posted 2 years ago

SmugDolphin23 I added a region and ran the experiment again. It didn't work

  
  
Posted 2 years ago

BoredBat47 Just to check: do you need to run update-ca-certificates or an equivalent?

  
  
Posted 2 years ago

I meant the code where you upload an artifact, sorry

  
  
Posted 2 years ago

SuccessfulKoala55 Fixed it by setting an env var with the path to the certificates. I was sure that wouldn't help, since curl and a Python GET request to my endpoint from the shell work just fine. Now it says I am missing security headers; that seems to be something on my side. Will try to fix this

  
  
Posted 2 years ago

SmugDolphin23 SuccessfulKoala55
2023-02-03 20:38:14,515 - clearml.metrics - WARNING - Failed uploading to <my-endpoint> (HTTPSConnectionPool(host='<my-endpoint>', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)'))))
2023-02-03 20:38:14,517 - clearml.metrics - ERROR - Not uploading 1/2 events because the data upload failed

  
  
Posted 2 years ago

BoredBat47 Yeah. This is an example:

    s3 {
        key: "mykey"
        secret: "mysecret"
        region: "us-east-1"
        credentials: [
            {
                bucket: "<bucket-name>"
                key: "mykey"
                secret: "mysecret"
                region: "us-east-1"
            },
        ]
    }
    # some other config
    default_output_uri: "<default-output-uri>"
  
  
Posted 2 years ago

How can you have a certificate error if you're using S3? I'm sure their certificate is OK...

  
  
Posted 2 years ago

BoredBat47 How would you connect with boto3? ClearML uses boto3 as well; what it basically does is get the key/secret/region from the conf file, and after that it opens a Session with the credentials. Have you tried deleting the region altogether from the conf file?

  
  
Posted 2 years ago

OK. By the way, you can find the region in the AWS dashboard

  
  
Posted 2 years ago

OddShrimp85 I fixed my SSL error by putting REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt in my .bashrc file
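The same can also be set from Python before the task starts, if editing .bashrc is inconvenient; a minimal sketch, assuming the standard Debian/Ubuntu bundle path:

    import os

    # point requests (and therefore ClearML's uploads) at the system CA bundle
    os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"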

  
  
Posted 2 years ago

Yeah, that's always the case with complex systems 😕

  
  
Posted 2 years ago

    s3 {
        # S3 credentials, used for read/write access by various SDK elements

        # default, used for any bucket not specified below
        key: "mykey"
        secret: "mysecret"
        region: ""

        credentials: [
            {
                bucket: "mybucket"
                key: "mykey"
                secret: "mysecret"
                region: ""
            },
        ]
    }
  
  
Posted 2 years ago

SuccessfulKoala55 Could you provide a sample of how to properly fill in all the necessary config values to make S3 work, please?
My endpoint starts with https:// and I don't know what my region is; the endpoint URL doesn't contain it.
Right now I fill it in like this:

    aws.s3.key = <access-key>
    aws.s3.secret = <secret-key>
    aws.s3.region = <blank>
    aws.s3.credentials.0.bucket = <just_bucket_name>
    aws.s3.credentials.0.key = <access-key>
    aws.s3.credentials.0.secret = <secret-key>
    sdk.development.default_output_uri = <endpoint-url>
  
  
Posted 2 years ago

It's the same file you added your S3 creds to

  
  
Posted 2 years ago

SmugDolphin23 Thank you very much!
That's the clearml.conf for ClearML end users, right?

  
  
Posted 2 years ago

SmugDolphin23 I didn't use a region at first, and that was not working. Now I use a region and it still doesn't work.
With boto3 inside Python I could create a session where I specify the AK and SK, and create a client from the session where I pass service_name and endpoint_url. That works just fine
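For reference, a minimal sketch of that flow (the key, secret, endpoint and bucket values are placeholders):

    import boto3

    # the session holds the AK/SK; the client gets the custom S3-compatible endpoint
    session = boto3.session.Session(
        aws_access_key_id="mykey",
        aws_secret_access_key="mysecret",
    )
    s3 = session.client(service_name="s3", endpoint_url="https://myendpoint:443")

    # e.g. upload a file to verify access
    s3.upload_file("some_artifact.txt", "mybucket", "some_artifact.txt")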

  
  
Posted 2 years ago

And I believe that by default we send artifacts to the ClearML server if not specified

  
  
Posted 2 years ago

OddShrimp85 I haven't done it; for me it worked as-is

  
  
Posted 2 years ago

BoredBat47 The bucket name in your case should just be somebucket (and should not start with s3://)

  
  
Posted 2 years ago

SmugDolphin23 Sorry to bother again: should output_uri be a URI to the S3 endpoint or to the ClearML fileserver? If it's not provided, artifacts are stored locally, right?

  
  
Posted 2 years ago

SmugDolphin23

  
  
Posted 2 years ago

SmugDolphin23 I actually don't know where to get the region for the S3 creds I am using. From what I figured, I have to plug my SK, AK and bucket into the credentials on the agent, and the output URI must be my S3 endpoint, a complete URI with protocol. Is that correct?

  
  
Posted 2 years ago

Hi again, BoredBat47! I actually took a closer look at this. The config file should look like this:

        s3 {
            key: "KEY"
            secret: "SECRET"
            use_credentials_chain: false

            credentials: [
                {
                    host: "myendpoint:443"  # no http(s):// and no s3:// prefix, also no bucket name
                    key: "KEY"
                    secret: "SECRET"
                    secure: true  # if https
                },
            ]
        }
        default_output_uri: "
"  # notice the s3:// prefix (not http(s))

The region should be optional, but try setting it as well if it doesn't work.

  
  
Posted 2 years ago

Oh, it's configured on the agent machine, got it

  
  
Posted 2 years ago

    from random import random
    from clearml import Task, TaskTypes

    args = {}
    task: Task = Task.init(
        project_name="My Proj",
        task_name="Sample task",
        task_type=TaskTypes.inference,
        auto_connect_frameworks=False
    )
    task.connect(args)
    task.execute_remotely(queue_name="default")
    value = random()
    task.get_logger().report_single_value(name="sample_value", value=value)
    with open("some_artifact.txt", "w") as f:
        f.write(f"Some random value: {value}\n")
    task.upload_artifact(name="test_artifact", artifact_object="some_artifact.txt")

  
  
Posted 2 years ago