Answered

Hi ClearML team! I'm trying to get S3 storage set up for my self-hosted server using our enterprise StorageGRID back-end, and I'm getting a weird error response when trying to upload an image (this is the first test of S3 storage for me, so nothing is working to date):
2022-07-21 13:09:36,279 - clearml.storage - ERROR - Exception encountered while uploading Failed uploading object /DeepSatcom/DeepSig simple NN.4d4cf090666b4bfdbfec71278c6f3bce/metrics/Data Samples/train/Data Samples_train_00000000.jpeg (403): <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>AWS authentication requires a valid Date or x-amz-date header</Message><Resource>/csaps-dev</Resource><RequestId>1658433979618029</RequestId></Error>

I'm also a bit confused about what, if any, S3 config needs to happen on the server side, as I only configured the client. Maybe this is just an incompatibility between StorageGRID and ClearML? I'm going to set up a MinIO bucket to test that theory. I'm also wondering if it has to do with the server docker containers being on UTC time.

Note that I can attach to the clearml-fileserver container, create a boto3 client inside python3, and create folders on the S3 server. I'm guessing this has to do with something missing in my config for the ClearML Server setup.

Their docs here: https://docs.netapp.com/us-en/storagegrid-116/pdfs/sidebar/S3_REST_API_supported_operations_and_limitations.pdf
include:
The StorageGRID system only supports valid HTTP date formats for any headers that accept date values. The time portion of the date can be specified in Greenwich Mean Time (GMT) format, or in Universal Coordinated Time (UTC) format with no time zone offset (+0000 must be specified). If you include the x-amz-date header in your request, it overrides any value specified in the Date request header. When using AWS Signature Version 4, the x-amz-date header must be present in the signed request because the date header is not supported.
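To illustrate the format StorageGRID is asking for: the x-amz-date header used by AWS Signature Version 4 is an ISO 8601 basic timestamp in UTC. A minimal stdlib sketch (not ClearML's actual signing code) of generating that value:

```python
from datetime import datetime, timezone

def amz_date(now: datetime) -> str:
    """Format a timestamp the way SigV4's x-amz-date header expects:
    ISO 8601 basic format, always in UTC with a trailing 'Z'."""
    return now.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

print(amz_date(datetime(2022, 7, 21, 13, 9, 36, tzinfo=timezone.utc)))
# → 20220721T130936Z
```

If the client is signing with SigV2 (which uses the Date header) while StorageGRID expects a SigV4-style x-amz-date, that mismatch would produce exactly this kind of 403.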

  
  
Posted 2 years ago

Answers 30


Oh, also secure needs to be true

  
  
Posted 2 years ago

Hmm...

  
  
Posted 2 years ago

I think you shouldn't have the scheme in the host definition
credentials: [
    {
        # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
        host: "STORAGE_GRID_URL:443"
        key: "REMOVED"
        secret: "REMOVED"
        multipart: false
        secure: true
        region: "us-east-1"
    }
]
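To illustrate the point about the scheme: the host field expects just host:port, with no https:// prefix. A small stdlib helper (hypothetical, not part of ClearML) showing the reduction:

```python
from urllib.parse import urlparse

def host_for_clearml(url: str) -> str:
    """Reduce a full URL like 'https://grid.example.com:443/' to
    the bare 'host:port' form the credentials host field expects."""
    parsed = urlparse(url)
    # netloc holds host:port when a scheme is present; otherwise
    # assume the value is already in the bare form
    return parsed.netloc or url

print(host_for_clearml("https://grid.example.com:443/"))
# → grid.example.com:443
```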

  
  
Posted 2 years ago

Is that because I didn't list the bucket name in the clearml.conf?

  
  
Posted 2 years ago

Wait - adding the output_uri seems to work.

  
  
Posted 2 years ago

you can add output_uri="https://STORAGE_GRID_URL:443/", but I don't think that's it

  
  
Posted 2 years ago

Also - I'm not specifying the URI when I create the Task

  
  
Posted 2 years ago

No worries. I probably should have revisited the examples. Too much cutting/pasting on my part. Thanks so much for helping!

  
  
Posted 2 years ago

Now to get clearml-data to use S3... 🙂

  
  
Posted 2 years ago

OK - I can try to hack that

  
  
Posted 2 years ago

Oh, right, that's totally my bad, sorry - too late 😞

  
  
Posted 2 years ago

I do see, looking at the code, that we're not passing the region_name to boto, maybe that's it

  
  
Posted 2 years ago

OK - it's the URL in the files_server that was wrong. It needs to be s3:// and not https://.
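For reference, the fix presumably looks like this in clearml.conf (host and bucket path are placeholders):

```
api {
    # Point the file server at the S3 bucket using the s3:// scheme,
    # not https:// - TLS is handled by the "secure: true" credential
    # setting, not by the URL scheme here
    files_server: "s3://STORAGE_GRID_URL:443/BUCKET_NAME"
}
```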

  
  
Posted 2 years ago

You mean no https?

  
  
Posted 2 years ago

Sure

  
  
Posted 2 years ago

yeah

  
  
Posted 2 years ago

Well, the nice thing is, if the SDK works with it, clearml-data basically uses the SDK, so... 😄

  
  
Posted 2 years ago

Nope - bucket_name in clearml.conf didn't work. Maybe a default_uri setting somewhere?
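If it helps, clearml.conf does have a default destination setting, so something like this (bucket path is a placeholder) should make new tasks upload to S3 without passing output_uri each time:

```
sdk {
    development {
        # Default destination for models/artifacts (placeholder bucket);
        # equivalent to passing output_uri to every Task.init call
        default_output_uri: "s3://STORAGE_GRID_URL:443/BUCKET_NAME"
    }
}
```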

  
  
Posted 2 years ago

Kudos on finding it! 🙂

  
  
Posted 2 years ago

output_uri=

  
  
Posted 2 years ago

so you should have something like:
credentials: [
    {
        # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
        host: " "
        key: "REMOVED"
        secret: "REMOVED"
        multipart: false
        secure: true
        region: "us-east-1"
    }
]

  
  
Posted 2 years ago

boto3 w/o region still worked

  
  
Posted 2 years ago

yup - this is handled by the secure: true part

  
  
Posted 2 years ago

Just to make sure that's the source of the error

  
  
Posted 2 years ago

Great! Now to tell our IT that I need more space on S3 🙂

  
  
Posted 2 years ago

nope

  
  
Posted 2 years ago

wait

  
  
Posted 2 years ago

Wait, can you try calling boto3 like you did without the bucket?

  
  
Posted 2 years ago

Same response. Should I change that in the fileserver section too?

  
  
Posted 2 years ago

I added secure and region - didn't change the behavior.

  
  
Posted 2 years ago