
Hi all, can I synchronize all my artefacts on the ClearML server with S3? I'm trying to deploy clearml-server in an environment with limited disk space.

Posted 2 months ago

Answers 7

Either that or have a shared mount between the machines

Posted 2 months ago

It's worth a try 🙂

Posted 2 months ago

I'm thinking of using s3fs on the entire /opt/clearml/data folder. What do you think?

Posted 2 months ago

That makes sense, but that would mean that each client/user has to manage the upload themselves, right?

(I'm trying to use clearml to create an abstraction over the compute / cloud)

Posted 2 months ago

Hi @<1535069219354316800:profile|PerplexedRaccoon19> , I'm not sure I understand what you mean. Can you elaborate on the use case?

Posted 2 months ago

So I am deploying clearml-server on an on-prem machine, and the checkpoints etc. for the experiments I will run are quite large.

Instead, I want to periodically upload / back up this data to S3 and free up local disk space. Is that something that is supported?

I see that in my docker-compose installation, most of the big files live under /opt/clearml/data.

Posted 2 months ago

I think you can periodically upload them to S3; the StorageManager would help with that. Do consider that artifacts are registered in the system as links (each artifact is a link in the end), so even if you move a file to an S3 bucket in the backend, the registered link will still point to the file server, and you would have to amend it somehow.
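If it helps, here is a minimal sketch of what such a periodic backup could look like with StorageManager. The bucket name, the size threshold, and the backup_large_files helper are illustrative assumptions for this example, not ClearML built-ins:

```python
# Sketch: periodically push large local files to S3 with clearml.StorageManager.
# The bucket/prefix and the size threshold below are placeholder assumptions.
from pathlib import Path

from clearml import StorageManager

DATA_ROOT = Path("/opt/clearml/data")        # where the docker-compose deployment keeps the big files
REMOTE_PREFIX = "s3://my-clearml-backup"     # hypothetical bucket


def backup_large_files(min_size_mb: int = 100) -> None:
    """Upload every file larger than min_size_mb to S3, preserving the relative layout."""
    for local_file in DATA_ROOT.rglob("*"):
        if local_file.is_file() and local_file.stat().st_size > min_size_mb * 1024 ** 2:
            remote_url = f"{REMOTE_PREFIX}/{local_file.relative_to(DATA_ROOT)}"
            uploaded = StorageManager.upload_file(
                local_file=str(local_file),
                remote_url=remote_url,
                wait_for_upload=True,
            )
            print(f"backed up {local_file} -> {uploaded}")
            # NOTE: any artifact link already registered in ClearML still points at the
            # original file-server URL; deleting the local copy without updating that
            # link will break it.


if __name__ == "__main__":
    backup_large_files()
```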

Why not upload specific checkpoints directly to s3 if they're extra heavy?
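For the extra-heavy checkpoints, a minimal sketch of what that could look like (project, task, and bucket names are made up for the example):

```python
# Sketch: point an experiment's output (model checkpoints, artifacts) straight at S3,
# so heavy files never land on the clearml-server file server.
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="train-with-s3-output",
    output_uri="s3://my-training-bucket/checkpoints",  # hypothetical bucket
)

# Framework checkpoints saved during training (torch.save, model.save, ...) are
# auto-captured and uploaded to output_uri instead of the local file server.
# Explicit artifacts can also be sent to the same destination:
task.upload_artifact(name="eval-report", artifact_object={"accuracy": 0.93})
```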

Posted 2 months ago