Question: Off-heap native memory usage (Lucene) by Elasticsearch running in Kubernetes
Misterhex opened this issue
As far as I understand, Lucene will use as much memory as it can get from the operating system (largely via the filesystem cache), which is referred to as off-heap native memory.
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://discuss.elastic.co/t/understanding-off-heap-usage/97176
https://stackoverflow.com/a/35232221
Based on this understanding, does it mean we have to run the Elasticsearch pods on dedicated Kubernetes nodes? Since the ES pods would keep consuming as much memory as they can, they could cause memory and disk pressure on the node and get other pods running on the same node evicted.
For example, say we have a node with 64 GB of memory, and for our Elasticsearch pods we set the resource request and limit to 8 GB and ES_HEAP_SIZE to 3 GB. Would Lucene use up all of the remaining ~60 GB on the node, or would it be confined to the remaining 5 GB by the cgroup limit?
Thanks!
IIRC the Java heap limits should be enough. If you don't trust those, you can define pod resource limits, and Kubernetes will kill the pod if it goes above them.
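A minimal sketch of the setup described in the question, assuming a recent official image (which takes the heap size via ES_JAVA_OPTS rather than the older ES_HEAP_SIZE); the pod name, image tag, and values are illustrative, not from the issue:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch
spec:
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2  # illustrative tag
      env:
        - name: discovery.type
          value: single-node
        # Cap the JVM heap well below the container limit so that
        # off-heap usage (Lucene mmap / page cache) has headroom.
        - name: ES_JAVA_OPTS
          value: "-Xms3g -Xmx3g"
      resources:
        requests:
          memory: "8Gi"
        limits:
          # The cgroup limit: heap plus off-heap memory charged to the
          # container is bounded here, not by the node's 64 GB.
          memory: "8Gi"
```

As to the cgroup question: as far as I know, the page cache Lucene relies on is charged to the container's cgroup, so it is confined to the 8 GB limit (the ~5 GB left after the heap), not to the node's free memory, and clean page cache is reclaimed under pressure rather than counting as a hard allocation.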
I have the same question, plus whether or not swap is disabled by default. Any idea how to verify that the Java heap limits are taking care of this?
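For the verification part, a few commands that might help, assuming the pod is named elasticsearch and runs in the current namespace (the pod name and the presence of curl in the image are assumptions):

```sh
# Swap devices on the node (swap is not namespaced): output with only
# the header line means no swap is configured.
kubectl exec elasticsearch -- cat /proc/swaps

# Confirm the heap flags the JVM actually started with.
kubectl exec elasticsearch -- ps aux | grep -o '\-Xm[sx][^ ]*'

# Ask Elasticsearch itself about heap and OS memory usage.
kubectl exec elasticsearch -- curl -s 'localhost:9200/_nodes/stats/jvm,os?pretty'
```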