SuccessfulKoala55 thanks for your help, as always. I will try to create a DAG in Airflow using the SDK to implement some form of retention policy that removes things that are not necessary. We independently store metadata on artefacts we produce, and mostly use ClearML as the experiment manager, so a lot of the events data can be cleared.
Hi TenseOstrich47,
In the ClearML Server, ES does not contain management-critical data, only raw (indexed) data, such as experiment metrics (plots, scalars, logs, debug image references) and performance statistics (queue usage statistics, worker metrics, etc.).
Losing ES data should not destabilize the server; you will simply lose some historical data (not that this is a good thing 😕 ).
Since ES does not really provide any retention-policy mechanism, you can implement maintenance scripts yourself to handle the various aspects of data cleanup.
In general, the indices used for queue metrics and worker stats can be safely deleted (they are usually rotated every month, so you can probably always delete last month's indices).
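A minimal sketch of such a cleanup script, using only the stdlib. Note the assumptions: the `queue_metrics_*` / `worker_stats_*` index-name prefixes, the `YYYY-MM` monthly suffix, and the ES URL/port are all guesses you should verify against your own server (e.g. with `GET _cat/indices`) before deleting anything:

```python
import urllib.request
from datetime import date

ES_URL = "http://localhost:9200"  # assumption: default ES port on the ClearML Server host

def previous_month_suffix(today: date) -> str:
    """Return the YYYY-MM suffix for the month before `today`."""
    year, month = (today.year, today.month - 1) if today.month > 1 else (today.year - 1, 12)
    return f"{year:04d}-{month:02d}"

def delete_stats_indices(today: date) -> None:
    """Delete last month's queue-metrics and worker-stats indices."""
    suffix = previous_month_suffix(today)
    for prefix in ("queue_metrics", "worker_stats"):  # assumption: verify these prefixes
        index = f"{prefix}_*_{suffix}"  # wildcard covers any id embedded in the name
        req = urllib.request.Request(f"{ES_URL}/{index}", method="DELETE")
        with urllib.request.urlopen(req) as resp:
            print(index, resp.status)

# delete_stats_indices(date.today())  # uncomment to run against your server
```

Wildcard deletes also require ES to allow them (`action.destructive_requires_name` must not be enforced), so dry-run with `GET` first.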
Task data (plots, scalars, logs, and debug image references) is not rotated, so the only "nice" way of managing retention is to delete old or unwanted tasks (or reset them, which essentially cleans all their indexed data). You can do that with a cron job that queries the server using the SDK, the Python APIClient, or simply the REST API.
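For the SDK route, something like this could work as a starting point. `Task.get_tasks()` and `task.delete()` are real SDK calls, but the `task_filter` syntax, the status list, and the `last_update` field access are assumptions to check against your server's `tasks.get_all` docs; the 90-day policy is just an example:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 90  # example policy: drop experiments older than 90 days

def cutoff(now: datetime, days: int = RETENTION_DAYS) -> datetime:
    """Anything last updated before this moment is eligible for deletion."""
    return now - timedelta(days=days)

def cleanup(dry_run: bool = True) -> None:
    # imported here so the sketch stays importable without clearml installed
    from clearml import Task

    limit = cutoff(datetime.utcnow())
    # assumption: task_filter dict is passed through to the tasks.get_all endpoint
    tasks = Task.get_tasks(task_filter={"status": ["completed", "failed", "closed"]})
    for task in tasks:
        last_update = task.data.last_update  # assumption: a datetime on the task object
        if last_update and last_update.replace(tzinfo=None) < limit:
            print("deleting", task.id, task.name)
            if not dry_run:
                task.delete()  # removes the task and its indexed events

# cleanup(dry_run=True)  # start with a dry run and inspect what would be deleted
```

Running this from a daily cron (or an Airflow DAG) with `dry_run=False` gives you a basic retention policy without touching ES directly.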