Thanks SuccessfulKoala55 !
Maybe you could add an option to your docker-compose file for limiting the size of the logs - since there is no limit by default, their size will grow forever, which doesn't sound ideal: https://docs.docker.com/compose/compose-file/#logging
Also in this docker-compose I removed the external binding of the ports for mongo/redis/es
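To illustrate the logging suggestion above, a per-service limit looks roughly like this (just a sketch - the apiserver service name and the size/count values here are only examples, per the compose docs linked above):

    services:
      apiserver:
        logging:
          # cap container logs at 10 rotated files of 1MB each;
          # with the json-file driver, docker rotates them automatically
          driver: "json-file"
          options:
            max-size: "1m"
            max-file: "10"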
Yes, latest docker-compose was already updated with this change 🙂
I think we'll add this to the documentation, as an optional instruction
This can either be set for every service in the docker-compose file, or globally on the machine in /etc/docker/daemon.json. BTW this is not mentioned anywhere, but I assume docker rotates these logs, otherwise there's not much use setting this 🙂
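For the global variant, the daemon config would look something like this (a sketch - the values are just examples; the docker daemon needs a restart afterwards, and it only affects containers created after the change):

    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "200k",
        "max-file": "10"
      }
    }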
Sure yes! As you can see I just added the block:

    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
To all services. Also in this docker-compose I removed the external binding of the ports for mongo/redis/es
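In case it's useful to others - removing the external binding just means dropping (or commenting out) the ports: mapping on those services; they stay reachable from the other containers on the compose network by service name. Roughly like this (service definition trimmed, image tag just illustrative):

    mongo:
      image: mongo:3.6.5
      # external binding removed - mongo is still reachable as "mongo" from
      # the other services in this compose file, just not from outside the host
      # ports:
      #   - "27017:27017"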
Guys, the experiments I had running didn't fail - they just waited and reconnected, this is crazy cool
JitteryCoyote63 can you share the docker-compose file so we can make sure the documentation follows it?
Relevant SO issue
Yup, one of the things I meant 🙂
Hi JitteryCoyote63,
Anything related to /var/lib/docker is actually out-of-scope for the Trains Server - we just use docker and docker-compose, and cleaning up after docker is something that I'm sure can be researched further (we'll also appreciate any input on the subject). I assume the storage is mostly related to cached containers and console output being stored by docker-compose.
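One thing that might be worth looking into on that front (not something we've verified for this setup, so review before running):

    # reclaims space under /var/lib/docker by removing stopped containers,
    # dangling images, unused networks and the build cache
    docker system prune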
As for the ElasticSearch data (which is the main culprit from your list) - this is simply experiment data and trains agent reports being stored, and some of this data can easily be cleared. Note that 30GB isn't that big, and it seems your experiments generate quite a bit of data. To see a breakdown of the data stored, you can simply query ES for the list of all indices, sorted by storage size, using http://<trains-hostname-or-ip>:9200/_cat/indices?v=true&s=store.size
(Of course, that's assuming port 9200 is open for external access. If not, just do it from the machine itself using curl against http://localhost:9200/_cat/indices?v=true&s=store.size)
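If you're running that from a shell, just remember to quote the URL (otherwise the & sends the command to the background):

    # list all ES indices with a header row, sorted by on-disk size (largest last)
    curl "http://localhost:9200/_cat/indices?v=true&s=store.size"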
- Stopping the server
- Editing the docker-compose.yml file, adding the logging section to all services
- Restarting the server
Docker-compose freed 10GB of logs
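For anyone following along, those steps map to roughly this (a sketch - assuming the docker-compose.yml sits in /opt/trains/ as in the install instructions; adjust the path if yours lives elsewhere):

    # stop the server
    docker-compose -f /opt/trains/docker-compose.yml down
    # edit docker-compose.yml and add the logging section to every service, then start it again
    docker-compose -f /opt/trains/docker-compose.yml up -d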
They indeed do auto-rotate when you limit the size of the logs