Answered

Hello, ~3 months ago I created a trains-server on a machine with 30 GB of disk space. Today I wasn't able to connect to the trains-server, so I checked the server and found that the disk was full. I ran:
sudo du -ah / | sort -h -r | head -n 20 and found that all the space was used by trains-server.
Is it normal that it takes so much space for storing experiments? Why are some of the following files so big? Can I safely delete some of these files? In particular, the ...-json.log file of size 6.9G inside the 00de8f3aa507... container (trains-apiserver). How can I free some space?

33G  /
18G  /var
16G  /var/lib/docker
16G  /var/lib
12G  /opt/trains/data
12G  /opt/trains
12G  /opt
11G  /opt/trains/data/elastic_7/nodes/0/indices
11G  /opt/trains/data/elastic_7/nodes/0
11G  /opt/trains/data/elastic_7/nodes
11G  /opt/trains/data/elastic_7
9.7G /opt/trains/data/elastic_7/nodes/0/indices/aSvYA5yrT2OFlI8RrFTc-g/0/index
9.7G /opt/trains/data/elastic_7/nodes/0/indices/aSvYA5yrT2OFlI8RrFTc-g/0
9.7G /opt/trains/data/elastic_7/nodes/0/indices/aSvYA5yrT2OFlI8RrFTc-g
8.7G /var/lib/docker/overlay2
6.9G /var/lib/docker/containers/00de8f3aa50732dd8d45ba87dbf32cb18d9359b3ac9d6c4f62c1f13993e14b93/00de8f3aa50732dd8d45ba87dbf32cb18d9359b3ac9d6c4f62c1f13993e14b93-json.log
6.9G /var/lib/docker/containers/00de8f3aa50732dd8d45ba87dbf32cb18d9359b3ac9d6c4f62c1f13993e14b93
6.9G /var/lib/docker/containers
4.8G /var/lib/docker/overlay2/b7aede29cb6b4f9da413bc5068f9ce7f95c1fd71349ccd816e4e9bcca74384e5
2.8G /var/lib/docker/overlay2/b7aede29cb6b4f9da413bc5068f9ce7f95c1fd71349ccd816e4e9bcca74384e5/merged

  
  
Posted 4 years ago

Answers 16


Thanks SuccessfulKoala55!
Maybe you could add to your docker-compose file an option for limiting the size of the logs; since there is no limit by default, their size will grow forever, which doesn't sound ideal: https://docs.docker.com/compose/compose-file/#logging
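For reference, a minimal sketch of what that option could look like in the docker-compose file (the service name and the size/file-count values here are only illustrative, not taken from an actual trains-server file):

services:
  apiserver:
    logging:
      driver: "json-file"
      options:
        max-size: "200k"   # rotate once a log file reaches 200 kB
        max-file: "10"     # keep at most 10 rotated files per container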

  
  
Posted 4 years ago

Hi JitteryCoyote63,
Anything related to /var/lib/docker is actually out of scope for the Trains Server - we just use docker and docker-compose, and cleaning up after docker is something that I'm sure can be further researched (we'd also appreciate any input on the subject) - I assume that storage is mostly related to cached containers and console output being stored by docker-compose.
As for the ElasticSearch data (which is the main culprit on your list) - this is simply experiment data and trains agent reports being stored, and some of this data can easily be cleared. Note that 30GB isn't that big, and that your experiments generate quite a bit of data. To see a breakdown of the data stored, you can simply query ES for the list of all indices, sorted by storage size, using http://<trains-hostname-or-ip>:9200/_cat/indices?v=true&s=store.size (of course, that's assuming port 9200 is open for external access; if not, just do it from the machine itself using curl http://localhost:9200/_cat/indices?v=true&s=store.size )
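For reference, a rough sketch of how that might look from the server machine, including how an index could be deleted once you've identified one you no longer need (the index name below is just a placeholder - check the listing first, and note that deleting an index is irreversible):

# list all indices, sorted by storage size
curl 'http://localhost:9200/_cat/indices?v=true&s=store.size'
# delete one specific index you no longer need (placeholder name)
curl -X DELETE 'http://localhost:9200/<old-index-name>'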

  
  
Posted 4 years ago

Guys, the experiments I had running didn't fail, they just waited and reconnected - this is crazy cool

  
  
Posted 4 years ago

Relevant SO issue

Yup, one of the things I meant 🙂

  
  
Posted 4 years ago

This can either be set for every service in the docker-compose file, or globally on the machine in /etc/docker/daemon.json. BTW this is not mentioned anywhere, but I assume docker rotates these logs, otherwise there's not much use in setting this 🙂
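For the global option, a sketch of what /etc/docker/daemon.json could contain (the values are illustrative, matching the per-service example later in this thread; the daemon defaults only apply to containers created after the docker daemon is restarted):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200k",
    "max-file": "10"
  }
}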

  
  
Posted 4 years ago

JitteryCoyote63 can you share the docker-compose file so we can make sure the documentation follows it?

  
  
Posted 4 years ago

Thanks for your help 🙏

  
  
Posted 4 years ago

As one would expect 🙂

  
  
Posted 4 years ago

I think we'll add this to the documentation as an optional instruction

  
  
Posted 4 years ago

Stopping the server
Editing the docker-compose.yml file, adding the logging section to all services
Restarting the server
docker-compose freed 10 GB of logs
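Roughly, that sequence corresponds to something like the following (assuming docker-compose is run from the directory containing the trains-server docker-compose.yml; adjust paths to your own setup):

docker-compose down     # stop the server
# edit docker-compose.yml and add the logging: section to every service
docker-compose up -d    # start the server again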

  
  
Posted 4 years ago

Ok, after:

  
  
Posted 4 years ago

They indeed do auto-rotate when you limit the size of the logs

  
  
Posted 4 years ago

Also in this docker-compose I removed the external binding of the ports for mongo/redis/es

Yes, latest docker-compose was already updated with this change 🙂

  
  
Posted 4 years ago

And thanks!

  
  
Posted 4 years ago

Sure, yes! As you can see, I just added the block
logging:
  driver: "json-file"
  options:
    max-size: "200k"
    max-file: "10"
to all services. Also in this docker-compose I removed the external binding of the ports for mongo/redis/es
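As a rough illustration of that port change (not the actual file from this thread), removing the external binding can mean either dropping the ports: entry for those services entirely, or binding them to localhost only, e.g. for elasticsearch:

ports:
  - "127.0.0.1:9200:9200"   # instead of "9200:9200", so the port is reachable only from the host itself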

  
  
Posted 4 years ago