Yolean/kubernetes-kafka

Latest Kafka not working for me

iwanskit opened this issue · 6 comments

Hi guys,

I have a problem with the latest version of your project,
and if you could help me out I would be very grateful
(older versions were working fine without any problems).
I'm using minikube with a Kubernetes cluster on my local machine.

Kubectl version:
kubectl_version

Now I'm starting this command flow:
-> sudo kubectl apply -f ./configure/minikube-storageclasses.yml
-> sudo kubectl apply -f ./00-namespace.yml
-> sudo kubectl apply -f ./zookeeper
here I have a small problem with the deployments, like on the screen below:
zookeeper
it takes some time, but in the end everything goes green

-> sudo kubectl apply -f ./kafka
-> sudo kubectl apply -f ./outside-services
-> sudo kubectl apply -f ./yahoo-kafka-manager
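Before moving on to Kafka Manager, it can help to confirm the pods are actually healthy at each step (a sketch; the `kafka` namespace name is taken from 00-namespace.yml, adjust if yours differs):

```shell
# Watch pods until all zookeeper and kafka pods are Running and Ready
kubectl get pods -n kafka -w

# Confirm the StatefulSets report all replicas ready before applying the next step
kubectl get statefulsets -n kafka
```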

Now at each step Kafka Manager reports the error below:
yahoo1

I'm trying to use this configuration, but it can't be processed:
yahoo2

Also of course I can't send any messages :(

Issues are quite possible after all the merges to master. Assuming it's master you're testing?

Troubleshooting needs more info. Kafka Manager is probably the next step after you know the cluster works. How did you try to produce? Are there any logs from the brokers or zookeeper indicating failures?
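A minimal smoke test, independent of Kafka Manager, could look like this (a sketch; the pod name `kafka-0`, the `kafka` namespace, and the in-pod script paths are assumptions based on this repo's defaults):

```shell
# Produce one message from inside a broker pod
kubectl -n kafka exec -it kafka-0 -- bash -c \
  'echo smoke-test | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1'

# Consume it back; if this prints "smoke-test", the cluster itself works
kubectl -n kafka exec -it kafka-0 -- \
  ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test1 --from-beginning --max-messages 1
```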

Yes, it is the latest version of the master branch.

Of course, here are the logs:

After starting, the Kafka broker logs many "Removed 0 expired offsets" records:

[2019-04-04 14:32:03,191] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2019-04-05 02:12:03,001] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
....

Besides this, I do not see anything special.

Zookeeper logs are much more interesting:
logs-from-zookeeper-in-zoo-0.txt
logs-from-zookeeper-in-pzoo-0.txt
(use e.g. Notepad++ for a better view)

For consume & produce testing I have two ways.
The first, and the one I choose most often, is Node.js scripts from this side. I use NodePorts to send and receive messages.
The second is kafkacat.
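With kafkacat, the NodePort round trip can be tested like this (a sketch; the port 32400 and the topic name are assumptions, check the ports your outside-services actually expose):

```shell
# Produce one message through the NodePort exposed by outside-services
echo "hello" | kafkacat -b "$(minikube ip):32400" -t test1 -P

# Consume everything on the topic and exit at end of partition
kafkacat -b "$(minikube ip):32400" -t test1 -C -o beginning -e
```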

@solsson

If master has some issues,
maybe you could suggest another, more stable branch for me to use?
Not necessarily with the latest version of Kafka.

The first zookeeper pods will always fail to resolve those with higher index at initial start. You should check the logs of pzoo-2 and zoo-1 instead.
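For example (assuming the `kafka` namespace from 00-namespace.yml):

```shell
# The higher-index ensemble members are the ones whose logs matter at initial start
kubectl -n kafka logs zoo-1
kubectl -n kafka logs pzoo-2
```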

v5.1.0 is the latest tag
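To test that tag instead of master, something like (a sketch; assumes a git clone of the repo):

```shell
# Fetch tags and switch the working tree to the v5.1.0 release
git fetch --tags
git checkout v5.1.0
```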

Thank you for your support @solsson.
My issue was fixed, and I don't know what the problem was because I made no changes at all.
Now Kafka on local Kubernetes is working, but I have another issue...

I'm now facing:

Warning FailedScheduling 10m (x5 over 10m) default-scheduler PersistentVolumeClaim is not bound: "data-pzoo-2" (repeated 4 times)
Warning FailedScheduling 9m10s (x4 over 9m58s) default-scheduler 0/4 nodes are available: 1 NodeUnderDiskPressure, 1 PodToleratesNodeTaints, 3 NoVolumeZoneConflict.
Normal NotTriggerScaleUp 4m7s (x26 over 9m49s) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
Warning FailedScheduling 4m (x17 over 8m10s) default-scheduler 0/5 nodes are available: 1 NodeUnderDiskPressure, 1 PodToleratesNodeTaints, 4 NoVolumeZoneConflict.

on AWS.
I would appreciate any suggestions.
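If it recurs, describing the claim and the available StorageClasses usually shows whether it is a zone or provisioner mismatch (a sketch; namespace assumed to be `kafka`):

```shell
# Events on the claim explain why it is not bound (e.g. no matching zone/StorageClass)
kubectl -n kafka describe pvc data-pzoo-2

# Check which StorageClasses exist and which one is the default
kubectl get storageclass
```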

The same issue... the fifth deployment attempt was successful :|