kubectl not in container
Hey,
when Kafka gets initialized, the init script fails to populate some values. I end up with this error in server.properties:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
#init#broker.rack=# zone lookup failed, see -c init-config logs
I checked the script in kafka/10broker-config.yml and went through the commands one by one; before the error, it also checks for the presence of kubectl in the PATH:
hash kubectl 2>/dev/null || {
sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/" /etc/kafka/server.properties
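The hash builtin is what performs that check: it exits zero only when the command is found in the PATH, so the block after || runs exactly when kubectl is missing. A minimal sketch of the idiom (the second command name below is deliberately made up):

```shell
# `hash cmd` exits 0 when cmd is in PATH (and caches its location),
# non-zero otherwise, so the `|| { ... }` branch in the init script
# runs only when kubectl is absent.
hash sed 2>/dev/null && echo "sed: found in path"
hash no-such-command-xyz 2>/dev/null || echo "no-such-command-xyz: not found in path"
```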
When I check inside the kafka-0 container:
root@kafka-0:/opt/kafka# echo $HOSTNAME
kafka-0
root@kafka-0:/opt/kafka# echo $NODE_NAME
root@kafka-0:/opt/kafka# kubectl
bash: kubectl: command not found
root@kafka-0:/opt/kafka#
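An empty NODE_NAME in the broker container is expected if, as in the usual pattern, the Downward API env var is declared only on the init container in 50kafka.yml. One way to check where it is defined (this is an assumption about the manifest layout, not a confirmed detail of this setup):

```shell
# List each init container's name and env vars; NODE_NAME should appear
# here (via fieldRef spec.nodeName) rather than on the broker container.
kubectl -n kafka get pod kafka-0 \
  -o jsonpath='{range .spec.initContainers[*]}{.name}: {.env}{"\n"}{end}'
```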
A manual search for the kubectl binary in the container also turns up nothing. I didn't think it needed to be there, but as mentioned before, kafka/10broker-config.yml has:
hash kubectl 2>/dev/null || { sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/" /etc/kafka/server.properties
What am I doing wrong? My main issue here, probably related, is that it also fails to set the outside names (advertised listeners):
#init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092
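On a successful init run that placeholder gets rewritten in place. A sketch of the substitution under assumed values; the real script lives in 10broker-config.yml, and the host and port below are illustrative (in reality they come from the node lookup and the per-broker NodePort):

```shell
OUTSIDE_HOST=my-node.example.com   # illustrative; resolved from the node
OUTSIDE_PORT=32400                 # illustrative; the per-broker NodePort

# Reproduce the placeholder line in a scratch file and rewrite it the way
# the init script rewrites /etc/kafka/server.properties:
props=$(mktemp)
echo '#init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092' > "$props"
sed -i "s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://${OUTSIDE_HOST}:${OUTSIDE_PORT}|" "$props"
cat "$props"
# advertised.listeners=OUTSIDE://my-node.example.com:32400,PLAINTEXT://:9092
```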
Kalli
As the comment says, use kubectl logs -c init-config to see the logs. The init container is already gone at that point, but if you add, for example, tail -f /dev/null at the bottom of init.sh, you'll halt the initialization there and can exec -c init-config into it.
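Concretely, that debugging flow looks something like this (the names assume this setup's defaults: namespace kafka, pod kafka-0, init container init-config):

```shell
# Read the init container's logs while the pod record still exists:
kubectl -n kafka logs kafka-0 -c init-config

# If the init container has already exited, append `tail -f /dev/null`
# to init.sh in 10broker-config.yml and redeploy; the container then
# stays alive so you can inspect it interactively:
kubectl -n kafka exec -it kafka-0 -c init-config -- /bin/bash
```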
ahhh man !!!! thanks so much for answering!
checked init logs ->
+ ANNOTATIONS=
+ hash kubectl
++ kubectl get node r3-k8s083 '-o=go-template={{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}'
Error from server (Forbidden): nodes "r3-k8s083" is forbidden: User "system:serviceaccount:kafka:default" cannot get nodes at the cluster scope
+ ZONE=
RBAC! I forgot to apply your RBAC configs in my K8s cluster, so I went and applied everything in the rbac-namespace-default folder.
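This kind of Forbidden error can also be confirmed (and the fix verified) without redeploying, by asking the API server whether the pod's service account may read nodes; the subject name comes straight from the error message above:

```shell
# Prints "no" before the rbac-namespace-default manifests are applied,
# and "yes" afterwards:
kubectl auth can-i get nodes \
  --as=system:serviceaccount:kafka:default
```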
Then I re-deployed Kafka:
kubectl delete statefulset kafka -n=kafka
kubectl apply -f ~/kafka/kafka/00namespace.yml
kubectl apply -f ~/kafka/kafka/10broker-config.yml
kubectl apply -f ~/kafka/kafka/20dns.yml
kubectl apply -f ~/kafka/kafka/30bootstrap-service.yml
kubectl apply -f ~/kafka/kafka/50kafka.yml
and once deployed, I finally get annotations on my pods:
[root@xxxx kafka]# kubectl describe pod kafka-0 -n=kafka
Name: kafka-0
Namespace: kafka
Node: <myHostName>/<myIP>
Start Time: Fri, 27 Apr 2018 15:09:18 +0100
Labels: app=kafka
controller-revision-hash=kafka-6fd9fcf77f
kafka-broker-id=0
statefulset.kubernetes.io/pod-name=kafka-0
Annotations: kafka-listener-outside-host=<myHostName>
kafka-listener-outside-port=32400
The kafka-broker-id is also among the labels, so I don't have to wire everything up manually... wooohooo!
For anyone who has problems exposing Kafka with outside-0.yaml, outside-1.yaml and outside-2.yaml: check first whether the pod has its proper labels and annotations. If not, the service will never work. Check the logs as described above to get it sorted.
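A quick way to do that check (namespace and pod name as in this setup; as I understand it, the outside-N services select on the kafka-broker-id label):

```shell
# Labels the per-broker outside services select on:
kubectl -n kafka get pod kafka-0 --show-labels

# Annotations written by the init script (host and port for the
# outside listener):
kubectl -n kafka get pod kafka-0 -o jsonpath='{.metadata.annotations}'
```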
Thanks again for your help!!!
Kalli