Containerized setup of Elasticsearch and Kibana that
- runs only a single node,
- follows recommendations for a production-ready cluster (at least to the degree that this is possible on a single node), and
- keeps configuration straightforward and well-documented.
Of course, the above comes with a huge caveat: running only a single node means no load balancing, no redundancy, and no data replication. You should therefore only use this setup if hardware/software failures are not critical in your use case, that is, where downtime and data loss are tolerable.
What then is offered in terms of being ready for production?
- Runs exactly the Docker images released by Elastic.
- Sets system and container settings as recommended by the Elasticsearch reference documentation.
- Automatically generates self-signed X.509 certificates during setup and encrypts all communication of both Elasticsearch and Kibana via TLS.
- Uses auto-generated passwords for the built-in users and does not store them in plaintext accessible from inside a container.
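To give an idea of what "single node, but production-minded" means in compose terms, such a setup typically boils down to something along the lines of the following sketch (service name, version tag, and paths are illustrative, not this repository's actual file):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # official Elastic image; tag illustrative
    environment:
      - discovery.type=single-node   # explicitly a one-node cluster
    ulimits:
      memlock:
        soft: -1                     # allow locking the JVM heap in RAM
        hard: -1
    volumes:
      - ./stack/data-elasticsearch:/usr/share/elasticsearch/data  # illustrative bind mount
    ports:
      - "9200:9200"
```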
A Unix environment with Make (currently only tested on Fedora) and the following installed:
Optional dependencies:
- Append the following to `/etc/sysctl.conf` (if it exists) or create a new file `/etc/sysctl.d/elasticsearch.conf` to minimize swapping and increase the number of possible mapped memory areas:

  ```
  vm.swappiness=1 # https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html
  vm.max_map_count=262144 # https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count
  ```

  To apply these settings, run `sudo sysctl -p` or restart your system.
- Increase your system ulimits so that arbitrarily much memory can be locked, at least 65535 files can be opened per process, and at least 4096 processes can be started per user.

  To configure this on a Fedora system, append the following to `/etc/security/limits.conf` and restart your machine afterwards:

  ```
  * - memlock -1
  * - nofile 65535
  #* - nproc 4096
  ```

  (The last line is commented out because, by default, the Fedora limit is already higher than 4096, which is fine.)
- Open `.env` and adjust the settings to your preference.

  The most important setting is `STACK_DIR`, which is the path to which all data and configuration will be written. By default, all data will be written to a subdirectory of this repository. (Subdirectories of the `STACK_DIR` are bind-mounted to the containers that use them.)
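  For illustration, a `.env` along the lines described here might look like this (`STACK_DIR` and `TAG` are the variables mentioned in this README; the values, including the `stack/` default, are hypothetical):

  ```
  STACK_DIR=./stack   # path to which all data and configuration is written (hypothetical default)
  TAG=7.17.0          # Elasticsearch/Kibana image version (illustrative)
  ```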
- Edit `instances.yml` and add the DNS names and IP addresses under which you will want to access your Elasticsearch and Kibana instances (these will be written into the generated X.509 certificates and can not easily be changed later).
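  For illustration, an `instances.yml` in the format used by Elastic's `elasticsearch-certutil` might look like this (names and addresses are placeholders, and whether this repository uses exactly this format is an assumption):

  ```yaml
  instances:
    - name: elasticsearch
      dns: ["localhost", "es.example.org"]
      ip: ["127.0.0.1", "192.0.2.10"]
    - name: kibana
      dns: ["localhost", "kibana.example.org"]
      ip: ["127.0.0.1", "192.0.2.10"]
  ```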
- Review the Elasticsearch configuration in `config-elasticsearch/`.

  Note that the contents of this directory are only used to bootstrap the Elasticsearch configuration; the configuration of the installed cluster will reside in `${STACK_DIR}/config-elasticsearch/`. You should probably at least adjust the name of your cluster by changing `cluster.name` in `elasticsearch.yml` and the heap size by changing `-Xms` and `-Xmx` in `jvm.options` to your needs.
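  For example, the two adjustments mentioned above might look like this (values are illustrative):

  ```yaml
  # elasticsearch.yml
  cluster.name: my-logging-cluster
  ```

  ```
  # jvm.options: set both values identically; common advice is at most half of the
  # machine's RAM and no more than ~31 GB, so compressed object pointers stay enabled.
  -Xms4g
  -Xmx4g
  ```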
- Review the Kibana configuration in `config-kibana/`.

  Note that the contents of this directory are only used to bootstrap the Kibana configuration; the configuration of the installed cluster will reside in `${STACK_DIR}/config-kibana/`. You don't need to change anything from the defaults.
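  For orientation, a bootstrapped `kibana.yml` in a TLS-enabled setup like this one typically contains settings along these lines (the keys are real Kibana options, but the values and paths here are illustrative, not this repository's actual file):

  ```yaml
  server.host: "0.0.0.0"                                         # listen on all interfaces
  elasticsearch.hosts: ["https://elasticsearch:9200"]            # talk to Elasticsearch over TLS
  elasticsearch.ssl.certificateAuthorities: ["/path/to/ca.crt"]  # trust the generated CA
  ```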
Makefile targets are used to operate the cluster. See `make help` for descriptions:

```
usage: make <target>

Targets:
  help                 Show this help message.
  start                Start the cluster (perform any setup if necessary).
  stop                 Stop the cluster.
  clean                Remove all created files (this deletes all your data!).
  logs-elasticsearch   Print the messages of Elasticsearch's JSON logs.
  logs-kibana          Print the messages of Kibana's JSON logs.
  health               Check the health status of the cluster.
  curl                 Send TLS-encrypted curl requests to the cluster.
```
- How do I perform setup and start the cluster?

  Run `make start`. Note that the services need a few seconds until everything is available. The first time this is executed, all necessary setup will be performed automatically. To stop the running cluster, use `make stop`.
- How can I access Kibana?

  Via https://localhost:5601/, where you substitute the IP or DNS name of the server you started the cluster on and the port you configured in `.env`. On first access, your browser will most likely warn you about a potentially insecure connection. This is unavoidable since self-signed certificates are used; just dismiss the warning. To log in, use the user `elastic` and the generated password from `${STACK_DIR}/passwords/elastic`.
- How can I create new user accounts?

  To use the default-configured `native` Elasticsearch authentication, just log into Kibana and follow the guides to create users and assign roles. The use of other authentication methods (LDAP, Active Directory, PKI, etc.) must be manually configured in the Elasticsearch configuration.
- How do I send `curl` requests?

  Because TLS encryption and password use are required, simple requests like `curl localhost:9200/_cat/health` will not receive a reply. Instead, you can use the `make curl` helper, like so:

  ```
  make curl URL=_cat/health
  ```

  Alternatively, because this exact command is often used, it is also available as `make health`.
- How do I adjust my Elasticsearch/Kibana configuration?

  Stop a potentially running cluster via `make stop`, adjust any configuration as desired in the `${STACK_DIR}/config-*` directories, and restart the cluster via `make start`.
- How do I upgrade to a new Elasticsearch/Kibana version?

  Stop a potentially running cluster via `make stop`, preferably back up your `${STACK_DIR}` directory, adjust the `TAG` entry in `.env` to the new desired version, and restart your cluster via `make start`.
- Where can I find the generated passwords for the built-in users?

  These are stored in `${STACK_DIR}/passwords`.
- Where can I find the generated X.509 certificates?

  These are stored in `${STACK_DIR}/certs`. Specifically, the certificate of the certificate authority (CA) is stored in `${STACK_DIR}/certs/ca/ca.crt`.
- How can I connect to the cluster from Python via elasticsearch-py?

  An example Python script can be found in `samples/python-connect/`.
- How can I debug problems with the cluster?

  Elasticsearch and Kibana log messages are available via `podman-compose logs elasticsearch` and `podman-compose logs kibana`, respectively. For example:

  ```
  {"type": "server", "timestamp": "2020-02-26T10:37:21,752Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-elasticstack", "node.name": "elasticsearch", "message": "node name [elasticsearch], node ID [gZ9sFqHGTyujlHoVXfDmsA], cluster name [docker-elasticstack]" }
  ```

  As these JSON messages can be quite unreadable, you can use the helpers `make logs-elasticsearch` and `make logs-kibana` to view just the `message` part of the logs. For example:

  ```
  2020-02-26T10:37:21,752Z | INFO | node name [elasticsearch], node ID [gZ9sFqHGTyujlHoVXfDmsA], cluster name [docker-elasticstack]
  ```
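  In practice, such helpers presumably pipe the container logs through a small filter. The following is a sketch of one way to condense a JSON log line into the `timestamp | level | message` form shown above (the repository's actual Makefile may be implemented differently):

  ```shell
  # Condense an Elasticsearch JSON log line to "timestamp | level | message".
  # (Sketch only; the actual make targets may work differently.)
  log='{"type": "server", "timestamp": "2020-02-26T10:37:21,752Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-elasticstack", "node.name": "elasticsearch", "message": "node name [elasticsearch], node ID [gZ9sFqHGTyujlHoVXfDmsA], cluster name [docker-elasticstack]" }'
  printf '%s\n' "$log" |
    sed -E 's/.*"timestamp": "([^"]*)".*"level": "([^"]*)".*"message": "([^"]*)".*/\1 | \2 | \3/'
  ```

  In real use you would feed `podman-compose logs elasticsearch` into the filter instead of the `log` variable.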
Please feel free to submit bug reports and pull requests!
Copyright 2019-2021 Lukas Schmelzeisen. Licensed under the Apache License, Version 2.0.