Yolean/kubernetes-kafka

Using local storage on SAN

Closed this issue · 9 comments

Hi,
I'm new to Kubernetes, so I hope this doesn't cause a problem for you; if so, please let me know.
I am using your template to set up Zookeeper/Kafka on our own Kubernetes environment and modified it so that data and config are written to local disk on the host (a LUN presented to all 6 hosts).
When I 'apply' the StatefulSets for pzoo, zoo and kafka, I see that the directories are created correctly, but no config files (configMap) are generated, hence the containers fail to start with the error: Back-off restarting failed container.

The error in the Docker logs: /bin/bash: /etc/kafka/init.sh: No such file or directory

Any clue on what I might be doing wrong? This is what I modified:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-zookeeper-config
  labels:
    type: local
spec:
  storageClassName: zookeeper-config
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/kubernetes/zk/config"

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-config-claim
spec:
  storageClassName: zookeeper-config
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

in 50pzoo.yml:

      volumes:
      - name: config
        configMap:
          name: zookeeper-config
        persistentVolumeClaim:
          claimName: zookeeper-config-claim
      - name: data
        persistentVolumeClaim:
          claimName: zookeeper-data-claim

Any help is highly appreciated.

Kind regards,

Eric V.

Is the configmap mount https://github.com/Yolean/kubernetes-kafka/blob/v3.0.1/kafka/50kafka.yml#L73 still present after your changes? The error /bin/bash: /etc/kafka/init.sh: No such file or directory indicates that it's not.

How you provision your other volumes is probably unrelated, but I'm curious whether you're using local volumes? I tested that with K8s 1.9 in Yolean/kubeadm-vagrant@ee897ea.
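
For reference, a local PersistentVolume is pinned to a single node and looks roughly like this in later Kubernetes versions (a sketch with placeholder names; in 1.9 the feature was still alpha and node affinity was set through an annotation rather than a spec field):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv         # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # a disk or mount that exists on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1               # the node that owns the disk

Unlike hostPath, the nodeAffinity tells the scheduler which node the data actually lives on.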

Thanks for your reply!
Yes it is still present:

        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/kafka/data

I am using a LUN presented to all six nodes in my cluster, mounted under /kubernetes.
I'm trying to get familiar with as many layers of Kubernetes as possible, but the learning curve is pretty steep :-). I'm running K8s 1.8.3 on Rancher.

Kind regards,

Eric V.

Is it present in both containers? What does kubectl --namespace kafka get configmap broker-config tell you? Three files in there? How about kubectl --namespace kafka get configmap broker-config -o yaml | grep '|-'?

Did zookeeper start, by the way? Maybe this project is a bit too heavy for getting acquainted with k8s. There are a lot of simpler StatefulSet examples out there.

Hi,
Thanks again for your reply. Nothing starts up; I'm taking one step at a time, so pzoo is the first StatefulSet trying to start. Below is the output from the commands you asked for, and the error I get on the StatefulSet:

kubectl --kubeconfig=/Users/evsteenbergen/.kube/config-ipvg get configmap broker-config --namespace=kafka
NAME            DATA      AGE
broker-config   3         2d

kubectl --kubeconfig=/Users/evsteenbergen/.kube/config-ipvg get configmap broker-config -o yaml --namespace=kafka | grep '|-'
  init.sh: |-
  log4j.properties: |-
  server.properties: |-

create Pod pzoo-0 in StatefulSet pzoo failed error: Pod "pzoo-0" is invalid: [spec.volumes[0].configMap: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "config", spec.initContainers[0].volumeMounts[0].name: Not found: "config"]

Kind regards,

Eric V.

Based on the last line of the output above, I think something is wrong with your manifests after the volume spec changes.
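
The error message points at the cause: an entry under volumes: may specify only one volume source, but the config volume in your snippet lists both configMap: and persistentVolumeClaim:. A corrected stanza, keeping the names from your snippet and letting config come from the ConfigMap alone, would look roughly like this:

      volumes:
      - name: config
        configMap:
          name: zookeeper-config
      - name: data
        persistentVolumeClaim:
          claimName: zookeeper-data-claim

Because the invalid volume is rejected, the config volume never exists, which is also why the volumeMounts report Not found: "config".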

Hi,
I think I found the issue... I tested the exact playbook locally on my Mac running the Docker for Mac beta, which comes with Kubernetes, and had no error at all. So I started reading the documentation again and found out that hostPath (which I'm using) cannot be used across a multi-node Kubernetes cluster, only with a single-node setup. So now I'm setting up a GlusterFS cluster, which should be supported in a multi-node cluster.
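
For what it's worth, the in-tree GlusterFS-backed PersistentVolume I have in mind looks roughly like this (just a sketch for now; the Endpoints name and Gluster volume path are placeholders until the cluster is up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data           # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster # an Endpoints object listing the Gluster servers
    path: zk-data                # name of the Gluster volume
    readOnly: false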
I'll keep you posted. Thanks again for your feedback and valuable help.

Kind regards,

Eric V.

OK if I close this, as it's not related to the setup in this repo?

Your findings may end up as a PR instead, like #104, #126 and #130. Comments on this issue are of course valuable too, even if it's closed.

Sure, no problem at all. Thank you again for all your time and assistance.