# Docker Kafka Connect image for the Confluent Open Source Platform using Oracle JDK

Available tags:

- `3.2.2` ([3.2.2/Dockerfile](3.2.2/Dockerfile))
- `3.3.0` ([3.3.0/Dockerfile](3.3.0/Dockerfile))
- `3.3.1`, `latest` ([3.3.1/Dockerfile](3.3.1/Dockerfile))

All tag names follow the naming convention of the Confluent Open Source Platform.
- Debian "slim" image variant
- Oracle JDK 8u152 added, without Mission Control, VisualVM, JavaFX, ReadMe files, source archives, etc.
- Oracle Java Cryptography Extension added
- Python 2.7.9-1 & pip 9.0.1 added
- SHA 256 sum checks for all downloads
- JAVA_HOME environment variable set up
- Utility scripts added:
  - Confluent utility belt script (`cub`) - a Python CLI for a Confluent tool called docker-utils
  - Docker utility belt script (`dub`)
- Apache Kafka Connect added:
  - version 0.10.2.1 in `3.2.2`
  - version 0.11.0.0 in `3.3.0`
  - version 0.11.0.1 in `3.3.1` and `latest`
This image was created with the sole purpose of offering the Confluent Open Source Platform running on top of Oracle JDK. Therefore, it follows the same structure as the original repository. More precisely:

Apart from the base image (`mbe1224/confluent-kafka`), it has the Apache Kafka Connect related packages, plus the Schema Registry, added on top of it, installed using the following Confluent Debian packages:
- `confluent-schema-registry-2.11`
- `confluent-kafka-connect-jdbc-2.11`
- `confluent-kafka-connect-hdfs-2.11`
- `confluent-kafka-connect-elasticsearch-2.11`
- `confluent-kafka-connect-storage-common-2.11`
- `confluent-kafka-connect-s3-2.11`
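For orientation, the Dockerfile's install step for these packages plausibly looks like the following sketch (the exact `RUN` instruction and pinned versions live in the per-tag Dockerfiles linked above; this is an illustration, not a copy):

```shell
# Sketch: install the Confluent Debian packages listed above from the
# Confluent apt repository (assumed to be configured in the base image).
apt-get update \
    && apt-get install -y --no-install-recommends \
        confluent-schema-registry-2.11 \
        confluent-kafka-connect-jdbc-2.11 \
        confluent-kafka-connect-hdfs-2.11 \
        confluent-kafka-connect-elasticsearch-2.11 \
        confluent-kafka-connect-storage-common-2.11 \
        confluent-kafka-connect-s3-2.11 \
    && rm -rf /var/lib/apt/lists/*
```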
## Build the image

```shell
docker build -t mbe1224/confluent-kafka-connect ./3.3.1/
```
Run the container
docker run -d \
--name=kafka-connect \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29092 \
-e CONNECT_REST_PORT=28082 \
-e CONNECT_GROUP_ID="quickstart" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-status" \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost"
mbe1224/confluent-kafka-connect
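Once the container is up, one can verify that the worker is ready through its REST API (a sketch, assuming the port 28082 configured in the command above and a reachable Kafka broker):

```shell
# The REST root returns worker version info once the worker has started.
curl -s http://localhost:28082/

# List deployed connectors; a fresh worker returns an empty JSON array: []
curl -s http://localhost:28082/connectors
```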
One can use the following environment variables to configure the Kafka Connect worker:
| # | Name | Default value | Meaning | Comments |
|---|------|---------------|---------|----------|
| 1 | CONNECT_BOOTSTRAP_SERVERS | - | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster | - |
| 2 | CONNECT_CONFIG_STORAGE_TOPIC | - | The name of the topic in which to store connector and task configuration data | This must be the same for all workers with the same group.id |
| 3 | CONNECT_CUB_KAFKA_MIN_BROKERS | 1 | Expected number of brokers in the cluster | Check the Confluent utility belt script (`cub`) - check_kafka_ready for more details |
| 4 | CONNECT_CUB_KAFKA_TIMEOUT | 40 | Time in secs to wait for the number of Kafka nodes to be available | Check the Confluent utility belt script (`cub`) - check_kafka_ready for more details |
| 5 | CONNECT_GROUP_ID | - | A unique string that identifies the Connect cluster group this worker belongs to | - |
| 6 | CONNECT_INTERNAL_KEY_CONVERTER | - | Converter class for internal keys that implements the Converter interface | - |
| 7 | CONNECT_INTERNAL_VALUE_CONVERTER | - | Converter class for internal values that implements the Converter interface | - |
| 8 | CONNECT_KEY_CONVERTER | - | Converter class for keys. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors | - |
| 9 | CONNECT_LOG4J_LOGGERS | - | Comma-separated list of logger=level overrides for additional log4j loggers | - |
| 10 | CONNECT_LOG4J_ROOT_LOGLEVEL | INFO | Log level of the Connect root logger | - |
| 11 | CONNECT_OFFSET_STORAGE_TOPIC | - | The name of the topic in which to store offset data for connectors | This must be the same for all workers with the same group.id |
| 12 | CONNECT_REST_ADVERTISED_HOST_NAME | - | The host name that Connect advertises to clients, i.e. a host name that can be reached by them | - |
| 13 | CONNECT_REST_PORT | 8083 | Port for incoming connections | - |
| 14 | CONNECT_STATUS_STORAGE_TOPIC | - | The name of the topic in which to store state for connectors | This must be the same for all workers with the same group.id |
| 15 | CONNECT_VALUE_CONVERTER | - | Converter class for values. This controls the format of the data that will be written to Kafka for source connectors or read from Kafka for sink connectors | - |
Moreover, one can use any of the properties specified in the Apache Kafka Connect Configuration Options by replacing "." with "_", upper-casing the name and prepending "CONNECT_". For example, instead of `config.storage.topic`, use `CONNECT_CONFIG_STORAGE_TOPIC`.
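The mapping rule above can be sketched in a couple of lines of shell (the property name `config.storage.topic` is taken from the example; any other Connect property works the same way):

```shell
# Derive the environment variable name for a Connect worker property:
# replace "." with "_", upper-case the result, and prepend "CONNECT_".
prop="config.storage.topic"
env_var="CONNECT_$(echo "$prop" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
echo "$env_var"   # CONNECT_CONFIG_STORAGE_TOPIC
```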