Answered

Hello everyone, I am using a self-hosted ClearML server on EC2 (ClearML community AMIs). This EC2 instance is attached to S3 with an IAM role. Now if I create or upload data from the client side, I want it to be uploaded to S3. There is a way to specify the bucket name and credentials in the clearml.conf file on the client side, but I have the restriction that my AWS keys expire every hour (this can't be changed, strictly for security reasons). So isn't there a way to specify the S3 bucket in the ClearML server config file, so that if I upload data it goes through the ClearML server EC2 to S3?
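For reference, the client-side configuration mentioned above looks roughly like this clearml.conf fragment (the bucket name and key values are placeholders; these are the credentials that expire hourly in this setup):

```ini
sdk {
    aws {
        s3 {
            # per-bucket credentials (placeholder values)
            credentials: [
                {
                    bucket: "my-clearml-bucket"
                    key: "AWS_ACCESS_KEY_ID"
                    secret: "AWS_SECRET_ACCESS_KEY"
                }
            ]
        }
    }
}
```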

  
  
Posted one year ago

Answers 10


Hi @<1624579015031394304:profile|JitterySeal56>

... and credentials in clearml.conf file on client side, but I have restrictions of aws keys expiring each hour

This means that you need to configure the IAM role on your client machine. The data never goes through the server; it is uploaded directly from the dev machine to the S3 bucket.

You can however just store the data on your clearml-files server ...
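Storing on the files server instead would be a matter of pointing the client's default output URI at it, roughly like this sketch (the server address is a placeholder):

```ini
# clearml.conf (client side) -- sketch
sdk {
    development {
        # store artifacts/models on the ClearML files server
        # instead of an external S3 bucket
        default_output_uri: "http://<clearml-server-address>:8081"
    }
}
```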

  
  
Posted one year ago

I have mounted my S3 bucket at the location /opt/clearml/data/fileserver/ but I can see my data is not being stored in S3; it's being stored in EBS. How so?

I'm assuming the mount was not successful.
What you should see is a link to the files server inside ClearML, and the actual files in your S3 bucket.

  
  
Posted one year ago

I have mounted my S3 bucket at the location /opt/clearml/data/fileserver/ but I can see my data is not being stored in S3; it's being stored in EBS. How so? It should only be stored in S3. Is there a mounting issue in my solution?

  
  
Posted one year ago

I can't configure my local machine for an IAM role.
Ok, so if we can't send the data to S3 through the server, can I mount the S3 bucket as a file system at the place where datasets are uploaded on the server? Will that solve this problem?

  
  
Posted one year ago

Where can I see this link?

  
  
Posted one year ago

can I mount the s3 bucket as file system on place where

you need to mount it where the file server is storing its files, correct (notice: not the DBs, just the files server)
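One way to sketch such a mount is with s3fs-fuse, assuming it is installed on the server EC2 instance and the instance has an IAM role with access to the bucket (the bucket name below is a placeholder):

```shell
# mount the bucket over the fileserver data directory,
# using the instance's IAM role for credentials
sudo s3fs my-clearml-bucket /opt/clearml/data/fileserver \
    -o iam_role=auto -o allow_other

# verify the mount is actually in place before uploading anything
mount | grep s3fs
```

If `mount | grep s3fs` shows nothing, the fileserver is still writing to the local EBS volume.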

  
  
Posted one year ago

Check the links that are generated in the UI when you upload an artifact or model

  
  
Posted one year ago

@<1523701205467926528:profile|AgitatedDove14> I was able to mount it, but when I run the experiment and upload data, it is going to S3 as well as EBS. Why? It should only go to S3.

  
  
Posted one year ago

data it is going to S3 as well as EBS. Why? It should only go to S3

This sounds odd. If this is mounted, then it goes to S3 (the link will point to the files server, but the data will be stored on the mounted drive, i.e. S3).
wdyt?

  
  
Posted one year ago

Yes, I resolved it. Basically we need to give the Docker containers extra privileges: add --privileged for a single container, or privileged: true in the case of docker-compose.yml.
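For anyone hitting the same issue, the docker-compose change described above would look roughly like this (shown for the fileserver service; adapt to your own compose file):

```yaml
# docker-compose.yml fragment -- sketch
# privileged mode lets the container see the host's FUSE (s3fs) mount
services:
  fileserver:
    privileged: true
```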

  
  
Posted one year ago
Tags
aws