Answered

Hi all, can I synchronize all my artifacts on the ClearML server with S3? I'm trying to deploy clearml-server in a limited disk-space environment.

Posted one year ago

Answers 7


That makes sense, but that would mean that each client/user has to manage the upload themselves, right?

(I'm trying to use ClearML to create an abstraction over the compute/cloud.)

Posted one year ago

It's worth a try 🙂

Posted one year ago

I think you can periodically upload them to S3; the StorageManager would help with that. Do consider that artifacts are logged in the system as links (each artifact is, in the end, a link), so even if you upload a file to an S3 bucket in the backend, there will still be a link pointing to the file server, and you would have to amend that somehow.
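
For example, a minimal sketch of that periodic-upload idea with StorageManager (the bucket name and checkpoint path are placeholders, not from this thread):

```
from clearml import StorageManager

# Hypothetical local checkpoint and target bucket - adjust to your setup.
local_checkpoint = "/opt/clearml/data/fileserver/my_project/checkpoint.pt"
remote_url = "s3://my-clearml-backups/my_project/checkpoint.pt"

# Upload the file to S3; returns the remote URL once the upload completes.
uploaded_url = StorageManager.upload_file(
    local_file=local_checkpoint,
    remote_url=remote_url,
    wait_for_upload=True,
)
print(f"Backed up to: {uploaded_url}")
```

Note that this only copies the bytes; as mentioned above, the artifact links registered in ClearML would still point to the file server.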

Why not upload specific checkpoints directly to S3 if they're extra heavy?
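
A minimal sketch of that direct-to-S3 approach, assuming the experiment code uses ClearML's Task API (project, task, and bucket names are placeholders):

```
from clearml import Task

# output_uri sends output models and artifacts straight to S3,
# bypassing the ClearML file server (and its local disk).
task = Task.init(
    project_name="my_project",
    task_name="heavy_experiment",
    output_uri="s3://my-clearml-bucket/experiments",
)
```

With this, checkpoints saved through the framework hooks land in the bucket, so the server's /opt/clearml/data stays small.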

Posted one year ago

Hi @<1535069219354316800:profile|PerplexedRaccoon19> , I'm not sure I understand what you mean. Can you elaborate on the use case?

Posted one year ago

I'm thinking of using s3fs on the entire /opt/clearml/data folder. What do you think?

Posted one year ago

Either that, or have a shared mount between the machines.

Posted one year ago

So I am deploying clearml-server on an on-prem server, and the checkpoints etc. are quite large for the experiments I will do.

Instead I want to periodically upload / back up this data to s3, and free up local disk space. Is that something that is supported?

I see that in my docker-compose installation, most of the big files are in /opt/clearml/data.

Posted one year ago