Answered

I configured S3 storage in my clearml.conf file on a worker machine. Then I ran an experiment which produced a small artifact, but it doesn't appear in my cloud storage. What am I doing wrong? How do I make artifacts appear in my S3 storage?
Below is a sample of clearml.conf with the S3 configuration.

        s3 {
            key: "mykey"
            secret: "mysecret"
            region: "myendpoint"
            # specifies key/secret credentials to use when handling s3 urls (read or write)
            credentials: [
                {
                    bucket: "mybucket"
                    key: "mykey"
                    secret: "mysecret"
                },
            ]
        }

  
  
Posted one year ago

Answers 41


@<1523701435869433856:profile|SmugDolphin23> I actually don't know where to get my region for the creds to the S3 I am using. From what I figured, I have to plug in my secret key, access key, and bucket into the credentials in the agent, and the output URI must be my S3 endpoint, i.e. the complete URI with protocol. Is that correct?

  
  
Posted one year ago

Hi again, @<1526734383564722176:profile|BoredBat47> ! I actually took a closer look at this. The config file should look like this:

        s3 {
            key: "KEY"
            secret: "SECRET"
            use_credentials_chain: false

            credentials: [
                {
                    host: "myendpoint:443"  # no http(s):// and no s3:// prefix, also no bucket name
                    key: "KEY"
                    secret: "SECRET"
                    secure: true  # if https
                },
            ]
        }
        default_output_uri: "s3://..."  # notice the s3:// prefix (not http(s))

The region should be optional, but try setting it as well if it doesn't work.

  
  
Posted one year ago

        s3 {
            # S3 credentials, used for read/write access by various SDK elements

            # default, used for any bucket not specified below
            key: "mykey"
            secret: "mysecret"
            region: ""

            credentials: [
                {
                    bucket: "mybucket"
                    key: "mykey"
                    secret: "mysecret"
                    region: ""
                },
            ]
        }
  
  
Posted one year ago

@<1523701304709353472:profile|OddShrimp85> I fixed my SSL error by putting REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt in my .bashrc file
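For reference, the fix above amounts to exporting the variable in the shell profile. The path is the Debian/Ubuntu-style system CA bundle; it may differ on other distros:

```shell
# Point Python's requests library (used by the ClearML SDK) at the
# system CA bundle so HTTPS connections to the S3 endpoint verify.
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
```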

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> I figured out where to find a region, but we don't have an AWS dashboard. We have a custom S3 solution for our own enterprise servers, like many companies do; the data is not stored on Amazon servers. That is why we have an endpoint, which is a URL starting with http://. If I were connecting to our bucket via boto3, I would pass the endpoint to a client session with endpoint_url.

  
  
Posted one year ago

I think that will work, but I'm not sure, actually. I know for sure that something like us-east-2 is supported.

  
  
Posted one year ago

Could you try adding region under credentials as well?

  
  
Posted one year ago

I meant the code where you upload an artifact, sorry
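For context, a typical artifact-upload snippet with the ClearML SDK looks something like the sketch below. The project and task names are placeholders, and actually running it requires a configured ClearML server, so the call is wrapped in a function rather than executed here:

```python
def upload_small_artifact():
    from clearml import Task  # lazy import; requires the clearml package

    # Task.init registers the run; output_uri (or default_output_uri in
    # clearml.conf) controls where artifacts are stored.
    task = Task.init(project_name="examples", task_name="s3-artifact-test")
    # upload_artifact serializes the object and uploads it to the
    # configured storage (S3 in this thread's setup).
    task.upload_artifact(name="numbers", artifact_object={"a": 1, "b": 2})
    task.close()
```

Calling upload_small_artifact() on the worker should then produce an artifact under the configured S3 URI.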

  
  
Posted one year ago

SmugDolphin23

  
  
Posted one year ago

Check the output_uri parameter in Task.init
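Passing output_uri directly to Task.init overrides the config-file default. A minimal sketch, assuming a placeholder endpoint and bucket (the call needs a reachable ClearML server, so it is only defined, not run):

```python
def init_task_with_s3_output():
    from clearml import Task  # lazy import; requires the clearml package

    # output_uri here overrides sdk.development.default_output_uri from
    # clearml.conf and sends artifacts/models to the given S3 location.
    return Task.init(
        project_name="examples",
        task_name="s3-output-test",
        output_uri="s3://myendpoint:443/mybucket",  # placeholder endpoint/bucket
    )
```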

  
  
Posted one year ago

SmugDolphin23 I added a region and ran the experiment again. It didn't work.

  
  
Posted one year ago