Not maintained, use bringg/docker-zk-exhibitor instead.
Runs an Exhibitor-managed ZooKeeper instance using S3 or GCS for backups and automatic node discovery.
Available on the Docker Index as simenduev/zookeeper-exhibitor:
docker pull simenduev/zookeeper-exhibitor
This is a forked version of mbabineau/zookeeper-exhibitor.
The fork adds support for the following:
- Google Cloud Storage support (via gcsfuse); requires privileged mode, see below
- Updated Exhibitor and ZooKeeper to their latest versions:
  - Exhibitor 1.5.6
  - ZooKeeper 3.4.8
- Uses the official Java container, based on the Alpine flavor
- Allows specifying a settling period (ZK_SETTLING_PERIOD)
The container expects the following environment variables to be passed in:
- HOSTNAME - addressable hostname for this node (Exhibitor will forward users of the UI to this address)
- GS_BUCKET - (optional) bucket used by Exhibitor for backups and coordination
- GS_PREFIX - (optional) key prefix within GS_BUCKET to use for this cluster
- S3_BUCKET - (optional) bucket used by Exhibitor for backups and coordination
- S3_PREFIX - (optional) key prefix within S3_BUCKET to use for this cluster
- AWS_ACCESS_KEY_ID - (optional) AWS access key ID with read/write permissions on S3_BUCKET
- AWS_SECRET_ACCESS_KEY - (optional) secret key for AWS_ACCESS_KEY_ID
- AWS_REGION - (optional) the AWS region of the S3 bucket (defaults to us-west-2)
- ZK_PASSWORD - (optional) the HTTP Basic Auth password for the "zk" user
- ZK_DATA_DIR - (optional) ZooKeeper data directory
- ZK_LOG_DIR - (optional) ZooKeeper log directory
- ZK_SETTLING_PERIOD - (optional) how long, in ms, to wait for the ensemble to settle (default: 2 minutes)
- HTTP_PROXY_HOST - (optional) HTTP proxy hostname
- HTTP_PROXY_PORT - (optional) HTTP proxy port
- HTTP_PROXY_USERNAME - (optional) HTTP proxy username
- HTTP_PROXY_PASSWORD - (optional) HTTP proxy password
Starting the container:
docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
-e S3_BUCKET=<bucket> \
-e S3_PREFIX=<key_prefix> \
-e AWS_ACCESS_KEY_ID=<access_key> \
-e AWS_SECRET_ACCESS_KEY=<secret_key> \
-e HOSTNAME=<host> \
simenduev/zookeeper-exhibitor:latest
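Optional settings are passed the same way. As a sketch, here is the same run with the ZooKeeper directories and settling period overridden (the paths and the 60000 ms value below are illustrative, not image defaults):
# ZK_DATA_DIR, ZK_LOG_DIR and ZK_SETTLING_PERIOD values are illustrative.
docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
-e S3_BUCKET=<bucket> \
-e S3_PREFIX=<key_prefix> \
-e AWS_ACCESS_KEY_ID=<access_key> \
-e AWS_SECRET_ACCESS_KEY=<secret_key> \
-e HOSTNAME=<host> \
-e ZK_DATA_DIR=/opt/zookeeper/snapshots \
-e ZK_LOG_DIR=/opt/zookeeper/transactions \
-e ZK_SETTLING_PERIOD=60000 \
simenduev/zookeeper-exhibitor:latest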
Once the container is up, confirm Exhibitor is running:
$ curl -s localhost:8181/exhibitor/v1/cluster/status | python -m json.tool
[
{
"code": 3,
"description": "serving",
"hostname": "<host>",
"isLeader": true
}
]
See Exhibitor's wiki for more details on its REST API.
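For scripted health checks you can poll the same status endpoint until the local node reports "serving". A minimal sketch (the timeout and the loose string match are arbitrary choices, not part of the image):
#!/bin/sh
# Poll Exhibitor's status endpoint (see above) until this node is "serving".
# 60 attempts x 5 seconds is an arbitrary timeout.
for i in $(seq 1 60); do
  if curl -s localhost:8181/exhibitor/v1/cluster/status | grep -q '"serving"'; then
    echo "Exhibitor is serving"
    exit 0
  fi
  sleep 5
done
echo "Exhibitor did not report serving in time" >&2
exit 1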
You can also check Exhibitor's web UI at http://<host>:8181/exhibitor/v1/ui/index.html
Then confirm ZK is available:
$ echo ruok | nc <host> 2181
imok
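ruok only answers imok. If you want more than a liveness check, ZooKeeper's other four-letter words, such as stat and mntr (both available in 3.4.x), report the server role, connection counts, and other state:
$ echo stat | nc <host> 2181
$ echo mntr | nc <host> 2181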
Exhibitor can also use an IAM role attached to the instance instead of explicit access and secret keys. This is an example policy the instance would need:
{
"Statement": [
{
"Resource": [
"arn:aws:s3:::exhibitor-bucket/*",
"arn:aws:s3:::exhibitor-bucket"
],
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetBucketAcl",
"s3:GetBucketPolicy",
"s3:GetObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Effect": "Allow"
}
]
}
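Attaching the role to the instance happens outside this container. As a rough AWS CLI sketch (the exhibitor-zk names are made up; trust-policy.json must allow ec2.amazonaws.com to assume the role, and policy.json is the document above):
aws iam create-role --role-name exhibitor-zk \
--assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name exhibitor-zk \
--policy-name exhibitor-s3 --policy-document file://policy.json
aws iam create-instance-profile --instance-profile-name exhibitor-zk
aws iam add-role-to-instance-profile --instance-profile-name exhibitor-zk \
--role-name exhibitor-zk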
Starting the container:
docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
-e S3_BUCKET=<bucket> \
-e S3_PREFIX=<key_prefix> \
-e HOSTNAME=<host> \
simenduev/zookeeper-exhibitor:latest
The most important note: in order to mount a GS bucket (via gcsfuse), the container must run in privileged mode.
Credentials for use with GCS will automatically be loaded using Google application default credentials, unless you mount a JSON key file.
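If you do mount a key file, a standard service account JSON key works; for example, one generated with gcloud (the service account address is a placeholder for your own values):
# <service_account> and <project> are placeholders.
gcloud iam service-accounts keys create key-file.json \
--iam-account=<service_account>@<project>.iam.gserviceaccount.com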
After everything is in place, this is how you start the container with GCS support:
docker run -p 8181:8181 -p 2181:2181 -p 2888:2888 -p 3888:3888 \
--privileged \
-e GS_BUCKET=<bucket> \
-e GS_PREFIX=<key_prefix> \
-e HOSTNAME=<host> \
-v <path to json file>:/opt/exhibitor/key-file.json \
simenduev/zookeeper-exhibitor
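To sanity-check that the bucket actually mounted, you can look for a gcsfuse mount inside the running container (the exact mount point is an image internal, and <container> is a placeholder):
docker exec <container> sh -c 'mount | grep gcsfuse'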