I configured S3 storage in my clearml.conf file on a worker machine. Then I ran an experiment which produced a small artifact and it doesn't appear in my cloud storage. What am I doing wrong? How to make artifacts appear on my S3 storage?

I configured S3 storage in my clearml.conf file on a worker machine. Then I ran an experiment which produced a small artifact, and it doesn't appear in my cloud storage. What am I doing wrong? How do I make artifacts appear in my S3 storage?
Below is a sample of clearml.conf with the S3 configuration.
` s3 {
    key: "mykey"
    secret: "mysecret"
    region: "myendpoint"
    # specifies key/secret credentials to use when handling s3 urls (read or write)
    credentials: [
        {
            bucket: "mybucket"
            key: "mykey"
            secret: "mysecret"
        },
    ]
} `

  
  
Posted 2 years ago

Answers 41


It's the same file you added your S3 creds to

  
  
Posted 2 years ago

` from random import random
from clearml import Task, TaskTypes

args = {}
task: Task = Task.init(
    project_name="My Proj",
    task_name="Sample task",
    task_type=TaskTypes.inference,
    auto_connect_frameworks=False
)
task.connect(args)
task.execute_remotely(queue_name="default")
value = random()
task.get_logger().report_single_value(name="sample_value", value=value)
with open("some_artifact.txt", "w") as f:
    f.write(f"Some random value: {value}\n")
task.upload_artifact(name="test_artifact", artifact_object="some_artifact.txt") `

  
  
Posted 2 years ago

SmugDolphin23 Sorry to bother again: should output_uri be a URI to the S3 endpoint or to the ClearML fileserver? If it's not provided, artifacts are stored locally, right?

  
  
Posted 2 years ago

OK. By the way, you can find the region in the AWS dashboard

  
  
Posted 2 years ago

I think that will work, but I'm not sure, actually. I know for sure that something like us-east-2 is supported

  
  
Posted 2 years ago

SmugDolphin23 Got it. Now I am a bit confused about the region parameter in the s3 section. Amazon docs say that region could be a regular URL with a protocol, like https://etc.etc, which my endpoint actually is. I plugged it into the s3 section in clearml.conf. Should it stay that way?

  
  
Posted 2 years ago

Could you try adding region under credentials as well?

  
  
Posted 2 years ago

SmugDolphin23 Thank you very much!
That's the clearml.conf for ClearML end users, right?

  
  
Posted 2 years ago

Can you share a snippet?

  
  
Posted 2 years ago

The code is run from another machine where clearml.conf is configured to connect to the ClearML server; no other configuration is provided

  
  
Posted 2 years ago

May I know which env variable to set the cert path in?

  
  
Posted 2 years ago

@<1526734383564722176:profile|BoredBat47> Just to check: do you need to run update-ca-certificates or the equivalent?

  
  
Posted 2 years ago

@<1523701435869433856:profile|SmugDolphin23> Hello again! I tried to fill in the values following your example. Still no luck. I noticed the console log on my task says that I have a certificate error. I disabled it in the api section in clearml.conf like this: verify_certificate = false, and I still get an SSL error. Any clues why that would be?

  
  
Posted 2 years ago

@<1523701304709353472:profile|OddShrimp85> I haven't done it; for me it worked as-is

  
  
Posted 2 years ago

A bit overwhelmed by the configuration, since it has an agent, a server and a bunch of configuration files; it's easy to mess up

  
  
Posted 2 years ago

` s3 {
    # S3 credentials, used for read/write access by various SDK elements

    # default, used for any bucket not specified below
    key: "mykey"
    secret: "mysecret"
    region: ""

    credentials: [
        {
            bucket: "mybucket"
            key: "mykey"
            secret: "mysecret"
            region: ""
        },
    ]
} `
  
  
Posted 2 years ago

How can you have a certificate error if you're using S3? I'm sure their certificate is OK...

  
  
Posted 2 years ago

@<1523701435869433856:profile|SmugDolphin23> I actually don't know where to get the region for the creds of the S3 I am using. From what I figured, I have to plug my sk, ak and bucket into the credentials in the agent, and the output URI must be my S3 endpoint, a complete URI with protocol. Is that correct?

  
  
Posted 2 years ago

@<1523701087100473344:profile|SuccessfulKoala55> Could you provide a sample of how to properly fill in all the necessary config values to make S3 work, please?
My endpoint starts with https:// and I don't know what my region is; the endpoint URL doesn't contain it.
Right now I fill it in like this:

aws.s3.key = <access-key>
aws.s3.secret = <secret-key>
aws.s3.region = <blank>
aws.s3.credentials.0.bucket = <just_bucket_name>
aws.s3.credentials.0.key = <access-key>
aws.s3.credentials.0.secret = <secret-key>
sdk.development.default_output_uri = <endpoint-url>
  
  
Posted 2 years ago

@<1523701087100473344:profile|SuccessfulKoala55> Fixed it by setting an env var with the path to the certificates. I was sure that wouldn't help, since I can curl and make a Python GET request to my endpoint from the shell just fine. Now it says I am missing security headers; seems it's something on my side. Will try to fix this
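
For reference, here is a minimal sketch of that kind of env-var fix; REQUESTS_CA_BUNDLE is the standard variable for the requests library and AWS_CA_BUNDLE for boto3 (the path is a placeholder, and whether your setup picks these up may vary):

` import os

# point the underlying HTTP libraries at the custom CA bundle (path is a placeholder)
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/ca-bundle.crt"  # read by requests
os.environ["AWS_CA_BUNDLE"] = "/path/to/ca-bundle.crt"       # read by boto3 `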

  
  
Posted 2 years ago

@<1526734383564722176:profile|BoredBat47> Yeah. This is an example:

s3 {
    key: "mykey"
    secret: "mysecret"
    region: "us-east-1"
    credentials: [
        {
            bucket: "mybucket"
            key: "mykey"
            secret: "mysecret"
            region: "us-east-1"
        },
    ]
}
# some other config
default_output_uri: "s3://mybucket"
  
  
Posted 2 years ago

Hi again, @<1526734383564722176:profile|BoredBat47> ! I actually took a closer look at this. The config file should look like this:

s3 {
    key: "KEY"
    secret: "SECRET"
    use_credentials_chain: false

    credentials: [
        {
            host: "myendpoint:443"  # no http(s):// and no s3:// prefix, also no bucket name
            key: "KEY"
            secret: "SECRET"
            secure: true  # if https
        },
    ]
}
default_output_uri: "s3://myendpoint:443/bucket"  # notice the s3:// prefix (not http(s))

The region should be optional, but try setting it as well if it doesn't work.
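
A quick way to sanity-check a config like this, as a minimal sketch (the project, task and bucket names are illustrative, not from this thread):

` from clearml import Task

task = Task.init(
    project_name="My Proj",
    task_name="s3 upload check",
    output_uri="s3://myendpoint:443/bucket",  # same s3:// form as default_output_uri above
)
# upload a small artifact; it should land in the bucket rather than on the fileserver
task.upload_artifact(name="check", artifact_object={"ok": True}) `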

  
  
Posted 2 years ago

And I believe that by default we send artifacts to the ClearML server if not specified

  
  
Posted 2 years ago

@<1523701087100473344:profile|SuccessfulKoala55> I figured out where to find a region, but we don't have an AWS dashboard. We have a custom S3 solution for our own enterprise servers, like many companies do; the data is not stored on Amazon servers. That is why we have an endpoint, which is a URL starting with http://. If I were to connect to our bucket via boto3, I would pass the endpoint to a client session with endpoint_url

  
  
Posted 2 years ago

SmugDolphin23

  
  
Posted 2 years ago

@<1523701435869433856:profile|SmugDolphin23> I didn't use a region at first, and that was not working. Now I use a region and it still doesn't work.
From boto3 inside Python, I can create a session where I specify the ak and sk, and create a client from the session where I pass service_name and endpoint_url. It works just fine
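
For comparison, roughly the boto3 pattern being described, as a minimal sketch (keys and endpoint are placeholders, not values from this thread):

` import boto3

# create a session with explicit credentials (placeholders)
session = boto3.session.Session(
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)
# pass the custom endpoint explicitly, as described above
client = session.client(service_name="s3", endpoint_url="https://myendpoint:443")
client.list_buckets()  # works fine against the custom endpoint `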

  
  
Posted 2 years ago

@<1526734383564722176:profile|BoredBat47> The bucket name in your case should just be somebucket (and should not start with s3://)

  
  
Posted 2 years ago

Oh, it's configured on the agent machine, got you

  
  
Posted 2 years ago

Check the output_uri parameter in Task.init.
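
For example, a minimal sketch (the bucket name is a placeholder):

` from clearml import Task

task = Task.init(
    project_name="My Proj",
    task_name="Sample task",
    output_uri="s3://mybucket",  # artifacts are uploaded here instead of the ClearML fileserver
) `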

  
  
Posted 2 years ago