Dockerfile source for elasticsearch docker image.
This source repo was originally copied from: https://github.com/docker-library/elasticsearch
This is not an official Google product.
This image contains an installation of Elasticsearch.
For more information, see the Official Image Marketplace Page.
Pull command (first install gcloud):
gcloud auth configure-docker && docker pull marketplace.gcr.io/google/elasticsearch7
The Dockerfile for this image can be found in this repository.
Consult Marketplace container documentation for additional information about setting up your Kubernetes environment.
Copy the following content to a pod.yaml file, and run kubectl create -f pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.
kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
--type LoadBalancer --port 9200 --protocol TCP
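Once the Service exists, a LoadBalancer can take a minute or two to be assigned an external IP. A small sketch to look it up (the service name comes from the command above; the snippet skips gracefully if kubectl is not installed):

```shell
# Print the LoadBalancer's external IP once it has been assigned.
SVC=some-elasticsearch-9200
if command -v kubectl >/dev/null 2>&1; then
  # jsonpath extracts the first ingress IP from the Service status.
  kubectl get svc "$SVC" \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
fi
```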
Elasticsearch requires a configured host environment. On a Linux host, run sysctl -w vm.max_map_count=262144. For details, please check the official documentation.
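The kernel setting can be inspected and persisted roughly as follows (a sketch assuming a Linux host with procfs; whether /etc/sysctl.d is read at boot depends on your distribution, so the persist step is an assumption):

```shell
# Read the current value, if procfs is available.
if [ -r /proc/sys/vm/max_map_count ]; then
  cat /proc/sys/vm/max_map_count
fi
# Set it for the running kernel (requires root):
#   sysctl -w vm.max_map_count=262144
# Persist across reboots (assumption: /etc/sysctl.d is applied at boot):
#   echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elasticsearch.conf
```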
To retain Elasticsearch data across container restarts, see Use a persistent data volume.
To configure your application, see Configurations.
To retain Elasticsearch data across container restarts, use a persistent volume for /usr/share/elasticsearch/data.
Copy the following content to a pod.yaml file, and run kubectl create -f pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
      volumeMounts:
        - name: elasticsearchdata
          mountPath: /usr/share/elasticsearch/data
  volumes:
    - name: elasticsearchdata
      persistentVolumeClaim:
        claimName: elasticsearchdata
---
# Request a persistent volume from the cluster using a Persistent Volume Claim.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearchdata
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
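After creating the claim, it can take a moment for the cluster to provision a volume. A sketch to check its state (assumes the claim name from the manifest above; skips if kubectl is not installed):

```shell
# STATUS should move from Pending to Bound once a volume is provisioned.
PVC=elasticsearchdata
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pvc "$PVC"
fi
```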
Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.
kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
--type LoadBalancer --port 9200 --protocol TCP
Attach to the container.
kubectl exec -it some-elasticsearch -- bash
The following examples use curl. First we need to install it, as it is not installed by default.
apt-get update && apt-get install -y curl
We can load test data into Elasticsearch using an HTTP PUT request.
curl -H "Content-Type: application/json" -XPUT http://localhost:9200/estest/test/1 -d \
'{
"name" : "Elasticsearch Test",
"Description": "This is just a test"
}'
We can try searching for our test data using curl.
curl http://localhost:9200/estest/_search?q=Test
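Beyond the q= URI parameter, the same search can be expressed with a JSON query body (a sketch assuming the index created above and an Elasticsearch instance reachable on localhost:9200; the match query is standard Elasticsearch query DSL):

```shell
# Build a match query against the "name" field of the test document.
QUERY='{"query": {"match": {"name": "Test"}}}'
# Send it only if curl is available; ignore connection failures.
if command -v curl >/dev/null 2>&1; then
  curl -s -H "Content-Type: application/json" \
    "http://localhost:9200/estest/_search" -d "$QUERY" || true
fi
```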
Assume /path/to/your/elasticsearch.yml is the configuration file on your localhost. We can mount it as a volume at /usr/share/elasticsearch/config/elasticsearch.yml on the container for Elasticsearch to read from.
Create the following configmap:
kubectl create configmap elasticsearchconfig \
--from-file=/path/to/your/elasticsearch.yml
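To confirm the ConfigMap actually contains your file, you can dump it (a sketch; skips if kubectl is not installed):

```shell
# The data section should list elasticsearch.yml with your contents.
CM=elasticsearchconfig
if command -v kubectl >/dev/null 2>&1; then
  kubectl get configmap "$CM" -o yaml
fi
```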
Copy the following content to a pod.yaml file, and run kubectl create -f pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
      volumeMounts:
        - name: elasticsearchconfig
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
  volumes:
    - name: elasticsearchconfig
      configMap:
        name: elasticsearchconfig
Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.
kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
--type LoadBalancer --port 9200 --protocol TCP
See Elasticsearch documentation on available configuration options.
Also see Volume reference.
Consult Marketplace container documentation for additional information about setting up your Docker environment.
Use the following content for the docker-compose.yml file, then run docker-compose up.
version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
Or you can use docker run directly:
docker run \
--name some-elasticsearch \
-p 9200:9200 \
-d \
marketplace.gcr.io/google/elasticsearch7
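After starting the container, Elasticsearch takes some seconds to come up. A sketch that polls the mapped HTTP port from the host (assumes curl is installed and the -p 9200:9200 mapping above):

```shell
# Poll until Elasticsearch answers on the mapped port, or give up.
ES_URL=http://localhost:9200
if command -v curl >/dev/null 2>&1; then
  for i in 1 2 3; do
    # -f makes curl fail on HTTP errors; break once the node responds.
    curl -sf "$ES_URL" && break
    sleep 1
  done
fi
```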
Elasticsearch requires a configured host environment. On a Linux host, run sysctl -w vm.max_map_count=262144. For details, please check the official documentation.
To retain Elasticsearch data across container restarts, see Use a persistent data volume.
To configure your application, see Configurations.
To retain Elasticsearch data across container restarts, use a persistent volume for /usr/share/elasticsearch/data.
Assume /path/to/your/elasticsearch/data is a persistent data folder on your host.
Use the following content for the docker-compose.yml file, then run docker-compose up.
version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    volumes:
      - /path/to/your/elasticsearch/data:/usr/share/elasticsearch/data
Or you can use docker run directly:
docker run \
--name some-elasticsearch \
-p 9200:9200 \
-v /path/to/your/elasticsearch/data:/usr/share/elasticsearch/data \
-d \
marketplace.gcr.io/google/elasticsearch7
Attach to the container.
docker exec -it some-elasticsearch bash
The following examples use curl. First we need to install it, as it is not installed by default.
apt-get update && apt-get install -y curl
We can load test data into Elasticsearch using an HTTP PUT request.
curl -H "Content-Type: application/json" -XPUT http://localhost:9200/estest/test/1 -d \
'{
"name" : "Elasticsearch Test",
"Description": "This is just a test"
}'
We can try searching for our test data using curl.
curl http://localhost:9200/estest/_search?q=Test
Assume /path/to/your/elasticsearch.yml is the configuration file on your localhost. We can mount it as a volume at /usr/share/elasticsearch/config/elasticsearch.yml on the container for Elasticsearch to read from.
Use the following content for the docker-compose.yml file, then run docker-compose up.
version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    volumes:
      - /path/to/your/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
Or you can use docker run directly:
docker run \
--name some-elasticsearch \
-p 9200:9200 \
-v /path/to/your/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-d \
marketplace.gcr.io/google/elasticsearch7
See Elasticsearch documentation on available configuration options.
Also see Volume reference.
In the following guide, you will learn how to create a simple two-node Elasticsearch cluster. This is only an example of how to configure and link containers together, not a production-ready configuration. For a production-ready configuration, please refer to the official documentation.
We will need a master node, which will also serve as the gateway to our cluster. A single agent node will be attached to the master node.
Use the following content for the docker-compose.yml file, then run docker-compose up.
version: '2'
services:
  elasticsearch-master:
    container_name: some-elasticsearch-master
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    command:
      - '-Enetwork.host=0.0.0.0'
      - '-Etransport.tcp.port=9300'
      - '-Ehttp.port=9200'
  elasticsearch-agent:
    container_name: some-elasticsearch-agent
    image: marketplace.gcr.io/google/elasticsearch7
    command:
      - '-Enetwork.host=0.0.0.0'
      - '-Etransport.tcp.port=9300'
      - '-Ehttp.port=9200'
      - '-Ediscovery.zen.ping.unicast.hosts=some-elasticsearch-master'
    depends_on:
      - elasticsearch-master
Or you can use docker run directly:
# elasticsearch-master
docker run \
--name some-elasticsearch-master \
-p 9200:9200 \
-d \
marketplace.gcr.io/google/elasticsearch7 \
-Enetwork.host=0.0.0.0 \
-Etransport.tcp.port=9300 \
-Ehttp.port=9200
# elasticsearch-agent
docker run \
--name some-elasticsearch-agent \
--link some-elasticsearch-master \
-d \
marketplace.gcr.io/google/elasticsearch7 \
-Enetwork.host=0.0.0.0 \
-Etransport.tcp.port=9300 \
-Ehttp.port=9200 \
-Ediscovery.zen.ping.unicast.hosts=some-elasticsearch-master
After a few seconds, we can check that the cluster is running by invoking http://localhost:9200/_cluster/health.
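A sketch of that health check from the host (assumes curl and python3 for pretty-printing; the status field reported by Elasticsearch is green, yellow, or red):

```shell
# Fetch and pretty-print the cluster health JSON; inspect "status".
HEALTH_URL=http://localhost:9200/_cluster/health
if command -v curl >/dev/null 2>&1; then
  curl -s "$HEALTH_URL" | python3 -m json.tool || true
fi
```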
These are the ports exposed by the container image.
Port | Description |
---|---|
TCP 9114 | Prometheus exporter port. |
TCP 9200 | Elasticsearch HTTP port. |
TCP 9300 | Elasticsearch default communication port. |
These are the filesystem paths used by the container image.
Path | Description |
---|---|
/usr/share/elasticsearch/data | Stores Elasticsearch data. |
/usr/share/elasticsearch/config/elasticsearch.yml | Stores configurations. |
/usr/share/elasticsearch/config/log4j2.properties | Stores logging configurations. |