Hi Martin, thanks for the swift response.
Yes, the artifacts; backing up the full database would not resolve the capacity question. Unless I’m missing something
Additionally -
Is there any clever functionality for dumping experiment data to external storage to avoid filling up the server?
Hi TrickyRaccoon92
... would any running experiment keep a cache of to-be-sent data, fail the experiment, or continue the run, skipping the recordings until the server is back up?
Basically they will keep trying to send data to the server until it is up again (you should not lose any of the logs)
... Is there any clever functionality for dumping experiment data to external storage to avoid filling up the server?
You mean the artifacts or the database?
Hmm TrickyRaccoon92, take a look at the cleanup service. I think you can hack it so that instead of deleting the artifacts, it archives them somewhere (you can also change the filter, e.g. only act on experiments with a specific user tag)
What do you think?
https://github.com/allegroai/trains/blob/master/examples/services/cleanup/cleanup_service.py
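For what it's worth, here is a minimal, untested sketch of that hack. The filter and deletion calls mirror what the linked cleanup_service.py does; the tag-based filter key, the "archive-to-blob" user tag, and the /mnt/blob-archive destination (e.g. a locally mounted blob-storage bucket) are all assumptions you would adapt to your setup:
```python
import shutil
from pathlib import Path

from trains import Task
from trains.backend_api.session.client import APIClient

# Hypothetical archive destination, e.g. a blob-storage bucket mounted locally
ARCHIVE_DIR = Path("/mnt/blob-archive")

client = APIClient()

# Assumed filter: only touch experiments carrying the "archive-to-blob" user tag
for task in Task.get_tasks(task_filter={"tags": ["archive-to-blob"]}):
    task_dir = ARCHIVE_DIR / task.id
    task_dir.mkdir(parents=True, exist_ok=True)

    # Pull each registered artifact into the local cache, then copy it out
    for name, artifact in task.artifacts.items():
        local_copy = artifact.get_local_copy()
        if local_copy:
            shutil.copy(local_copy, task_dir / Path(local_copy).name)

    # Same deletion call the cleanup service uses, but only after archiving
    client.tasks.delete(task=task.id, force=True)
```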
I guess I could do a backup of the DB and flush the data, but what I’m looking for is more of a “Select X experiments -> Send to blob storage” workflow to free up space.