Installing a Kafka cluster using containers is a quick way to get up and running. It's portable and lightweight, so it works on any machine running Docker. As you'll see in this lesson, it takes much less time to reach the point where we can create our first topic. Use the commands below to copy and paste into your own terminal:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt update
sudo apt install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo usermod -a -G docker cloud_user
sudo -i
curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
git clone https://github.com/linuxacademy/content-kafka-deep-dive.git
cd content-kafka-deep-dive
docker-compose up -d --build
sudo apt install -y default-jdk
wget http://mirror.cogentco.com/pub/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
tar -xvf kafka_2.12-2.2.0.tgz
cd kafka_2.12-2.2.0
./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test --partitions 3 --replication-factor 1
./bin/kafka-topics.sh --zookeeper localhost:2181 --topic test --describe
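To confirm the cluster is actually accepting messages end to end, we can produce a message and read it straight back. This is a minimal smoke test, assuming the docker-compose setup exposes a broker on localhost:9092 (adjust the address if your compose file maps a different port):

```shell
# Produce a single message to the "test" topic (assumes a broker on localhost:9092)
echo "hello kafka" | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

# Read one message back from the beginning of the topic, then exit
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test \
  --from-beginning --max-messages 1
```

If the consumer prints `hello kafka`, the brokers, ZooKeeper, and topic replication are all working.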
Now that we've set up our Kafka cluster, let's explore some of the commands for creating topics and for producing and consuming messages. In this lesson, we'll go over how to determine which flag to use, as well as how to use a combination of flags. Overall, the command line is friendly, giving a verbose explanation when someone does something wrong.
bin/kafka-topics.sh
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --topic test1 --create --partitions 3 --replication-factor 3
bin/kafka-topics.sh --zookeeper zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka --topic test1 --create --partitions 3 --replication-factor 3
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --list
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --topic test2 --describe
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --topic test2 --delete
bin/kafka-console-producer.sh
bin/kafka-console-consumer.sh
bin/kafka-consumer-groups.sh
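Running any of these scripts with no arguments prints its full usage, which is the easiest way to discover which flags can be combined. As one example of combining flags, an existing topic's partition count can be increased (it can never be decreased) with --alter:

```shell
# Grow test1 from 3 to 6 partitions; Kafka will reject any attempt to shrink it
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --topic test1 --alter --partitions 6
```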
By using a producer, you can publish messages to the Kafka cluster. In this lesson, we'll produce some messages to the topics we've created so far. There are a few things to keep in mind about how topics get created and how the default partition settings apply.
bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test
bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test --producer-property acks=all
bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test4
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --list
bin/kafka-topics.sh --zookeeper zookeeper1:2181/kafka --topic test5 --describe
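The console producer also reads from standard input, so instead of typing messages interactively you can pipe in a file. Here `messages.txt` is a hypothetical file with one message per line:

```shell
# Produce a batch of messages from a file, one message per line
bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test < messages.txt
```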
Consumers are the only way to get messages out of a Kafka cluster. In this lesson, we'll retrieve some of the messages we produced in the last lesson and learn a bit about how consumers keep track of their offset.
bin/kafka-console-consumer.sh --bootstrap-server kafka3:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server kafka3:9092 --topic test --from-beginning
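The console consumer normally prints only message values. To see each message's key as well, you can pass formatter properties (the keys will show as null for messages typed into the console producer, since it doesn't set keys by default):

```shell
# Print key and value separated by ":" for each message
bin/kafka-console-consumer.sh --bootstrap-server kafka3:9092 --topic test \
  --from-beginning --property print.key=true --property key.separator=:
```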
Kafka was designed to read multiple messages at once using consumer groups, which increases the speed at which messages are consumed. The consumers work intelligently: within a group, no two consumers read the same message, and each keeps track of where it left off using the offset. In this lesson, we'll discover the power of consumer groups and how to describe their characteristics.
bin/kafka-console-consumer.sh --bootstrap-server kafka3:9092 --topic test --group application1
bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server kafka3:9092 --topic test --group application1 --from-beginning
bin/kafka-consumer-groups.sh --bootstrap-server kafka3:9092 --list
Describe a consumer group:
bin/kafka-consumer-groups.sh --bootstrap-server kafka3:9092 --describe --group application1
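The same tool can also rewind a group's committed offsets, which is handy for reprocessing a topic from the start. The group must have no active consumers when you run this; you can preview the change with --dry-run before committing it with --execute:

```shell
# Preview resetting application1's offsets on the test topic to the earliest offset
bin/kafka-consumer-groups.sh --bootstrap-server kafka3:9092 --group application1 \
  --topic test --reset-offsets --to-earliest --dry-run

# Apply the reset for real
bin/kafka-consumer-groups.sh --bootstrap-server kafka3:9092 --group application1 \
  --topic test --reset-offsets --to-earliest --execute
```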