kafka.common.InconsistentBrokerIdException: Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties.
paltaa opened this issue · 6 comments
So I got the ZooKeeper cluster running and the nodes manage to elect a master. But when I start the Kafka cluster, the first pod runs alright, then the second pod, kafka-1, gets this error:
[2018-10-16 17:13:30,729] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:628)
at kafka.server.KafkaServer.startup(KafkaServer.scala:201)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
[2018-10-16 17:13:30,755] INFO shutting down (kafka.server.KafkaServer)
[2018-10-16 17:13:30,765] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-10-16 17:13:30,777] INFO EventThread shut down for session: 0x1667dceb4710001 (org.apache.zookeeper.ClientCnxn)
[2018-10-16 17:13:30,777] INFO Session: 0x1667dceb4710001 closed (org.apache.zookeeper.ZooKeeper)
[2018-10-16 17:13:30,782] INFO shut down completed (kafka.server.KafkaServer)
[2018-10-16 17:13:30,788] FATAL Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-10-16 17:13:30,789] INFO shutting down (kafka.server.KafkaServer)
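For context: Kafka writes the broker id into a `meta.properties` file in each log directory (`log.dirs`) the first time a broker starts, and on later startups it compares that stored id against the configured `broker.id`, failing fast on a mismatch. A minimal sketch of that check (my illustration with hypothetical paths, not Kafka's actual code):

```shell
#!/bin/sh
# Simulate Kafka's broker.id consistency check (sketch only).
DATA_DIR=$(mktemp -d)

# First startup with broker.id=0 writes meta.properties into the log dir.
printf 'version=0\nbroker.id=0\n' > "$DATA_DIR/meta.properties"

# Second startup, now configured with broker.id=1 against the SAME directory:
CONFIGURED_ID=1
STORED_ID=$(sed -n 's/^broker.id=//p' "$DATA_DIR/meta.properties")

if [ "$CONFIGURED_ID" != "$STORED_ID" ]; then
  echo "InconsistentBrokerIdException: configured $CONFIGURED_ID != stored $STORED_ID"
fi
rm -rf "$DATA_DIR"
```

This is why the exception message suggests either fixing `broker.id` (if the data was moved) or wiping `log.dirs` (if a fresh broker is intended).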
If you have changed the number of replicas of pzoo, you may also need to change the start index for init.sh: https://github.com/Yolean/kubernetes-kafka/blob/v4.2.0/zookeeper/51zoo.yml#L29
@solsson Okay, I reduced the zoo replicas from 5 to 3, so should I change the number to 3? I set it to 1 and ZooKeeper runs alright; if I change it to anything else it crashes.
I think you can do trial-and-error and document your findings here.
Okay, but I'm still not understanding what that ID_OFFSET variable is doing. Could you explain a little further?
It's used in the configmap's init.sh.
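For readers following along: in a StatefulSet, pods get stable names with an ordinal suffix (zoo-0, zoo-1, ...), and the init script derives each ZooKeeper server id from that ordinal plus an offset, so the `zoo` pods' ids don't collide with the `pzoo` pods' ids. A hedged sketch of the idea (variable names and the offset value are approximate, not the repo's exact script):

```shell
#!/bin/sh
# Sketch: derive a ZooKeeper myid from the StatefulSet pod ordinal (approximate, not verbatim).
HOSTNAME_EXAMPLE="zoo-1"          # StatefulSet pods are named <name>-<ordinal>
ID_OFFSET=4                       # hypothetical: with 3 pzoo replicas holding ids 1..3, zoo ids start at 4

ORDINAL=${HOSTNAME_EXAMPLE##*-}   # strip everything up to the last '-', leaving "1"
SERVER_ID=$((ORDINAL + ID_OFFSET))

echo "myid=$SERVER_ID"
```

So when the pzoo replica count changes, the offset used for the zoo StatefulSet has to change with it, which is what the trial-and-error above is probing.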
Okay, found the error: it was in the persistent volumes. I was using local storage from the hosts, with 3 different PVs, one for each Kafka pod. The problem was that they were all mounting the same directory, so I changed them to kafka-0, kafka-1 and kafka-2 respectively:
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: kafka-pv-0
labels:
type: local
spec:
storageClassName: kafka
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/whitenfv/kafka-0/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: kafka-pv-1
labels:
type: local
spec:
storageClassName: kafka
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/whitenfv/kafka-1/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: kafka-pv-2
labels:
type: local
spec:
storageClassName: kafka
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/whitenfv/kafka-2/data"
So it wasn't a problem related to ID_OFFSET after all. Thanks anyway for the help, @solsson!