elasticsearch:5.0.0 max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
czerasz opened this issue · 22 comments
I get this message when I try to run the elasticsearch:5.0.0 image:
$ docker run -it --rm --name=es-test elasticsearch:5.0.0
[2016-07-15 09:07:35,914][WARN ][bootstrap ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: your kernel is buggy and you should upgrade
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:279)
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:616)
at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:215)
at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99)
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:94)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:147)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:250)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:96)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:91)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:63)
[2016-07-15 09:07:36,040][INFO ][node ] [Count Abyss] version[5.0.0-alpha4], pid[1], build[3f5b994/2016-06-27T16:23:46.861Z], OS[Linux/4.4.0-28-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_91/25.91-b14]
[2016-07-15 09:07:36,040][INFO ][node ] [Count Abyss] initializing ...
[2016-07-15 09:07:36,773][INFO ][plugins ] [Count Abyss] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy], plugins []
[2016-07-15 09:07:37,296][INFO ][env ] [Count Abyss] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/ubuntu--vg-root)]], net usable_space [344.8gb], net total_space [446gb], spins? [possibly], types [ext4]
[2016-07-15 09:07:37,297][INFO ][env ] [Count Abyss] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 09:07:38,437][INFO ][node ] [Count Abyss] initialized
[2016-07-15 09:07:38,437][INFO ][node ] [Count Abyss] starting ...
[2016-07-15 09:07:38,521][INFO ][transport ] [Count Abyss] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed
initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:125)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:85)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:65)
at org.elasticsearch.bootstrap.Bootstrap$5.validateNodeBeforeAcceptingRequests(Bootstrap.java:178)
at org.elasticsearch.node.Node.start(Node.java:373)
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:193)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:252)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:96)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:91)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:63)
Refer to the log for complete error details.
[2016-07-15 09:07:38,528][INFO ][node ] [Count Abyss] stopping ...
[2016-07-15 09:07:38,537][INFO ][node ] [Count Abyss] stopped
[2016-07-15 09:07:38,538][INFO ][node ] [Count Abyss] closing ...
[2016-07-15 09:07:38,546][INFO ][node ] [Count Abyss] closed
I also tried this:
$ docker run -it --rm --name=es-test -v $PWD/configuration/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml elasticsearch:5.0.0
[2016-07-15 09:12:37,798][WARN ][bootstrap ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: your kernel is buggy and you should upgrade
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:279)
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:616)
at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:215)
at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99)
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:94)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:147)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:250)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:96)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:91)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:63)
[2016-07-15 09:12:37,907][INFO ][node ] [winner] version[5.0.0-alpha4], pid[1], build[3f5b994/2016-06-27T16:23:46.861Z], OS[Linux/4.4.0-28-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_91/25.91-b14]
[2016-07-15 09:12:37,907][INFO ][node ] [winner] initializing ...
[2016-07-15 09:12:38,754][INFO ][plugins ] [winner] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy], plugins []
[2016-07-15 09:12:39,334][INFO ][env ] [winner] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/ubuntu--vg-root)]], net usable_space [344.8gb], net total_space [446gb], spins? [possibly], types [ext4]
[2016-07-15 09:12:39,334][INFO ][env ] [winner] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 09:12:40,509][INFO ][node ] [winner] initialized
[2016-07-15 09:12:40,509][INFO ][node ] [winner] starting ...
[2016-07-15 09:12:40,592][INFO ][transport ] [winner] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
Exception in thread "main" java.lang.RuntimeException: bootstrap checks failed
initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:125)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:85)
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:65)
at org.elasticsearch.bootstrap.Bootstrap$5.validateNodeBeforeAcceptingRequests(Bootstrap.java:178)
at org.elasticsearch.node.Node.start(Node.java:373)
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:193)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:252)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:96)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:91)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:91)
at org.elasticsearch.cli.Command.main(Command.java:53)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:70)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:63)
Refer to the log for complete error details.
[2016-07-15 09:12:40,598][INFO ][node ] [winner] stopping ...
[2016-07-15 09:12:40,608][INFO ][node ] [winner] stopped
[2016-07-15 09:12:40,608][INFO ][node ] [winner] closing ...
[2016-07-15 09:12:40,616][INFO ][node ] [winner] closed
With the following config:
cluster.name: development
node.name: winner
bootstrap.memory_lock: false
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
Same here.
set max_map_count value (Linux)
sudo sysctl -w vm.max_map_count=262144
From this link: the vm.max_map_count kernel setting needs to be set to at least 262144 for production use. Depending on your platform:
Linux
The vm.max_map_count setting should be set permanently in /etc/sysctl.conf:
$ grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system type:
sysctl -w vm.max_map_count=262144
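For example, a minimal sketch of making the setting persistent and applying it immediately (assuming root access on the host; adjust to your distribution's conventions):
$ # persist the setting across reboots
$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
$ # apply it to the running kernel right away
$ sudo sysctl -w vm.max_map_count=262144
$ # verify
$ sysctl vm.max_map_count
vm.max_map_count = 262144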
Hey guys,
I'm on Mac and I have increased the memory to 6GB,
and I still get the same error...
Any ideas?
@tzookb, did you also set the max map count like in this comment? And possibly an env var of ES_JAVA_OPTS="-Xms1g -Xmx1g"?
Thanks @yosifkit
I can add ES_JAVA_OPTS="-Xms1g -Xmx1g" to my Docker .env file, right?
But where do I update the max map count setting, vm.max_map_count=262144?
Thanks!!
Linux 16 on Azure
Please don't forget to open port 9200 in Azure,
and run this command:
sudo sysctl -w vm.max_map_count=262144
@tzookb you should run the sysctl -w command on the host machine, not in the Docker container. This will solve the issue. Hope this helps...
set max_map_count value (Linux)
sudo sysctl -w vm.max_map_count=262144
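For example, a rough sketch combining both suggestions (the container name, port mapping, and heap sizes below are just placeholders): raise the limit on the host first, then start the container with matching min/max heap so the heap-size bootstrap check passes as well:
$ # on the host, not inside the container
$ sudo sysctl -w vm.max_map_count=262144
$ # then run the container with equal -Xms/-Xmx values
$ docker run -d --name=es -p 9200:9200 -e ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch:5.0.0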
I don't have sudo access on the host machine, so how do I solve this in that case?
@kanihal, if you have full Docker access, you have sudo access.
$ sysctl vm.max_map_count
vm.max_map_count = 262144
$ docker run -it --rm --privileged --net=host --pid=host -v /:/host debian:sid chroot /host sysctl -w vm.max_map_count=262145
vm.max_map_count = 262145
$ sysctl vm.max_map_count
vm.max_map_count = 262145
@yosifkit Thanks it worked.
host info:
>> lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux testing (buster)
Release: testing
Codename: buster
sysctl was not in the PATH; it was actually in /sbin/sysctl.
BTW why are we using debian:sid? Could we use any other Debian release?
BTW why are we using debian:sid? Could we use any other Debian release?
No particular reason; you could use whatever base image you want as long as it has chroot.
$ docker run -it --rm --privileged --net=host --pid=host -v /:/host debian:sid chroot /host /sbin/sysctl -w vm.max_map_count=262145
@yosifkit
Does this have any side effects that could break drivers on the host system?
I heard from other users of the shared system that the nvidia drivers for the GPU stopped working sometime after this because of some kernel mismatch/panic.
Any reason that would happen? Just wondering if it's even possible.
Does this have any side effects of breaking drivers on the host system?
@kanihal, I have no idea. I only know that Elasticsearch requires a large enough vm.max_map_count. This can only be done on the host, thus affecting everything running on the system, and the value needs to be persisted correctly or it will be lost on reboot. If this is incompatible with other things running on the same box (no clue on nvidia driver requirements or whether this could cause problems there), then you'll need to run it on a different host.
As the image is working fine then I suggest heading to the Docker Community Forums, the Docker Community Slack, or Stack Overflow for further help or debugging.
As a side note, if you are using the 5 series, then you will have #153 and not run into this problem; that is not true for the 6 series and above (like the config from 6.6.0). I did manage to get elasticsearch:6.6.0 to work on a host with a lower vm.max_map_count (this is not a recommended production setup, but could work for some limited evaluation and development):
$ sysctl vm.max_map_count
vm.max_map_count = 262000
$ # ie, less than the required 262144
$ # basically this replaces the default cluster ready config
$ # and sets the config in such a way that the bootstrap checks done by elasticsearch are no longer fatal
$ # probably need a volume, not "-it --rm", etcetera
$ docker run -it --rm elasticsearch:6.6.0 sh -c 'echo "http.host: 0.0.0.0" > /usr/share/elasticsearch/config/elasticsearch.yml && exec docker-entrypoint.sh'
....
[2019-02-20T00:53:20,533][WARN ][o.e.b.BootstrapChecks ] [qjR9jM-] max virtual memory areas vm.max_map_count [262000] is too low, increase to at least [262144]
....
It would probably be simpler to create an image:
FROM elasticsearch:6.6.0
# cat local-config.yml
# http.host: 0.0.0.0
COPY local-config.yml /usr/share/elasticsearch/config/elasticsearch.yml
# alternative to COPY + file
RUN echo 'http.host: 0.0.0.0' > /usr/share/elasticsearch/config/elasticsearch.yml
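To use that image, something along these lines should work (the my-es tag is just an example name):
$ docker build -t my-es .
$ docker run -it --rm -p 9200:9200 my-es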
If someone else stumbles over this for a test system, you can also disable the bootstrap checks by setting the following in elasticsearch.yml:
discovery.type: single-node
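As a variation on the same idea, the setting can also be passed as an environment variable instead of editing elasticsearch.yml; a sketch for a throwaway test container (image tag and port mapping are just examples):
$ docker run -it --rm -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.6.0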
If you are using Docker on Windows Home, you have to do the following:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
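Note that a value set this way inside the docker-machine VM is lost when the VM restarts. One commonly suggested workaround, assuming the default boot2docker VM (the bootlocal.sh path is an assumption and may differ on your setup), is to append the command to the VM's boot script:
$ docker-machine ssh
$ echo 'sysctl -w vm.max_map_count=262144' | sudo tee -a /var/lib/boot2docker/bootlocal.sh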
My docker-compose file is functional!
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
container_name: es01
environment:
- node.name=es01
- discovery.seed_hosts=es02
- cluster.initial_master_nodes=es01,es02
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
ulimits:
nofile:
soft: 65535
hard: 65535
memlock:
soft: -1
hard: -1
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- esnet
es02:
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
container_name: es02
environment:
- node.name=es02
- discovery.seed_hosts=es01
- cluster.initial_master_nodes=es01,es02
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- esdata02:/usr/share/elasticsearch/data
networks:
- esnet
volumes:
esdata01:
driver: local
esdata02:
driver: local
networks:
esnet:
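For reference, a quick way to bring this stack up and check that the two nodes formed a cluster (assuming vm.max_map_count has already been raised on the host as discussed above):
$ docker-compose up -d
$ curl http://localhost:9200/_cluster/health?pretty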
How do I run this:
sudo sysctl -w vm.max_map_count=262144
right before my Docker containers start up (imagine that I reboot an EC2 instance)? Does docker-compose have a "before" hook?
I'd recommend using a file in /etc/sysctl.d to set that value persistently.
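For example, a minimal sketch (the 99-elasticsearch.conf file name is just an example):
$ # create a drop-in file so the value survives reboots
$ echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
$ # reload all sysctl configuration files, including the new one
$ sudo sysctl --system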
In the future, these sorts of questions/requests would be more appropriately posted to the Docker Community Forums, the Docker Community Slack, or Stack Overflow.