It seems Elasticsearch has allocated a heap of 32GiB but only uses 4GiB of it. Where/why were the 32GiB allocated?
Can you share the Elasticsearch part of your docker compose file? Are you using any overrides?
Last time I tried docker compose, Elasticsearch took a lot of RAM!!
You need to limit its RAM usage with mem_limit:

[...]
  elasticsearch:
    networks:
      - backend
    container_name: clearml-elastic
    mem_limit: 2g
    environment:
      bootstrap.memory_lock: "true"
      cluster.name: clearml
[...]
I think ES uses a greedy strategy where it allocates the memory first and then uses it from there ...
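That matches the JVM defaults: Elasticsearch sets -Xms equal to -Xmx out of the box, so the entire heap is committed at startup, which would explain seeing 32GiB reserved while only 4GiB is actively used. A minimal sketch of pinning the heap yourself via the standard ES_JAVA_OPTS environment variable in the compose file (the 1g value is just an illustrative choice, not something from this thread):

  elasticsearch:
    environment:
      # ES defaults to -Xms == -Xmx, so the full heap is reserved up front;
      # pinning both to a small value keeps the reservation down.
      # 1g is an arbitrary example value.
      ES_JAVA_OPTS: -Xms1g -Xmx1g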
ManiacalLizard2 what happens when ES hits the limit? Does it go OOM, or does loading scalars in the web UI just take a long time? And what about tasks writing scalars to the index?
ManiacalLizard2 Do you have an observation/experience as to what happens when ES hits the limit?
And how much memory does Elasticsearch realistically need?
Sorry I missed your message: no, I don't know what happens when ES reaches its RAM limit. We do self-host in Azure, but we use the ES SaaS, and our cloud engineer manages that part.
My only experience was when I tried to spin up a local server from docker compose to test something, and it took my PC down because ES ate all my RAM!!
Thank you for getting back to me!
I have reduced it to a max of 2GB for the container and 1GB for the Java heap inside the container. So far I haven't experienced any issues 👍
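For anyone finding this later, a sketch of what those two limits can look like together in the compose file, assuming the same service layout as the snippet above. The 2g/1g values mirror this thread; the ulimits block is the usual recommendation from the Elasticsearch docker docs when bootstrap.memory_lock is enabled, not something confirmed here:

  elasticsearch:
    container_name: clearml-elastic
    mem_limit: 2g                    # hard cap for the whole container
    environment:
      bootstrap.memory_lock: "true"  # lock the heap so it isn't swapped out
      ES_JAVA_OPTS: -Xms1g -Xmx1g    # pin the JVM heap to 1GB
    ulimits:
      memlock:                       # required for memory_lock to take effect
        soft: -1
        hard: -1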