All pods stuck in Pending
yueyuan4 opened this issue · 5 comments
Hi, I ran these commands in order:
kubectl apply -f configure/docker-storageclass-broker.yml
kubectl apply -f configure/docker-storageclass-zookeeper.yml
kubectl apply -f 00-namespace.yml
kubectl apply -f rbac-namespace-default/
kubectl apply -f zookeeper/
kubectl apply -f kafka/
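(Side note for anyone reproducing this: the two configure/ manifests are supposed to create the storage classes that the statefulsets' volumeClaimTemplates request. A quick sanity check, assuming nothing about the exact class names:)
kubectl get storageclass
# The classes from configure/docker-storageclass-broker.yml and
# configure/docker-storageclass-zookeeper.yml should be listed here; if they
# are missing or named differently from what the volumeClaimTemplates
# request, the PVCs can never bind.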
I found that all pods were stuck in Pending:
kubectl -n kafka get all
NAME          READY   STATUS    RESTARTS   AGE
pod/kafka-0   0/1     Pending   0          7m27s
pod/kafka-1   0/1     Pending   0          7m27s
pod/kafka-2   0/1     Pending   0          7m27s
pod/pzoo-0    0/1     Pending   0          7m35s
pod/pzoo-1    0/1     Pending   0          7m35s
pod/pzoo-2    0/1     Pending   0          7m35s
pod/zoo-0     0/1     Pending   0          7m35s
pod/zoo-1     0/1     Pending   0          7m35s

NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/bootstrap   ClusterIP   10.103.133.150   <none>        9092/TCP            7m27s
service/broker      ClusterIP   None             <none>        9092/TCP            7m27s
service/pzoo        ClusterIP   None             <none>        2888/TCP,3888/TCP   7m35s
service/zoo         ClusterIP   None             <none>        2888/TCP,3888/TCP   7m35s
service/zookeeper   ClusterIP   10.104.4.80      <none>        2181/TCP            7m35s

NAME                     DESIRED   CURRENT   AGE
statefulset.apps/kafka   3         3         7m27s
statefulset.apps/pzoo    3         3         7m35s
statefulset.apps/zoo     2         2         7m35s
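Since every pod is Pending, checking the claims themselves (a diagnostic sketch; the claim names follow the data-<pod> pattern from the volumeClaimTemplates) narrows it down to storage:
kubectl -n kafka get pvc
# If claims such as data-zoo-0 show STATUS Pending, no PersistentVolume or
# dynamic provisioner is satisfying them, and the scheduler will keep the
# pods Pending.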
kubectl -n kafka describe pod zoo-0
Name:               zoo-0
Namespace:          kafka
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=zookeeper
                    controller-revision-hash=zoo-7d44fdcc4b
                    statefulset.kubernetes.io/pod-name=zoo-0
                    storage=persistent-regional
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/zoo
Init Containers:
  init-config:
    Image:      solsson/kafka-initutils@sha256:f6d9850c6c3ad5ecc35e717308fddb47daffbde18eb93e98e031128fe8b899ef
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      /etc/kafka-configmap/init.sh
    Environment:
      ID_OFFSET:  4
    Mounts:
      /etc/kafka from config (rw)
      /etc/kafka-configmap from configmap (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrcnv (ro)
Containers:
  zookeeper:
    Image:       solsson/kafka:2.2.0@sha256:cf048d6211b6b48f1783f97cb41add511386e2f0a5f5c62fa0eee9564dcd3e9a
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      ./bin/zookeeper-server-start.sh
      /etc/kafka/zookeeper.properties
    Limits:
      memory:  120Mi
    Requests:
      cpu:     10m
      memory:  100Mi
    Readiness:  exec [/bin/sh -c [ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_LOG4J_OPTS:  -Dlog4j.configuration=file:/etc/kafka/log4j.properties
    Mounts:
      /etc/kafka from config (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrcnv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-zoo-0
    ReadOnly:   false
  configmap:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zookeeper-config
    Optional:  false
  config:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-zrcnv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zrcnv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  9m1s (x10 over 9m22s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
Thank you for your help.
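The FailedScheduling event is the key: "pod has unbound immediate PersistentVolumeClaims" means the claim data-zoo-0 has no matching PersistentVolume and no dynamic provisioner has acted on it. On a single-node test cluster, one way to unblock a claim is a hand-made hostPath volume. This is only a sketch: the storageClassName and capacity below are placeholders and must match what configure/docker-storageclass-zookeeper.yml and the zoo volumeClaimTemplate actually request.
# Sketch for a single-node test cluster only; not suitable for production.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-zoo-0-pv
spec:
  capacity:
    storage: 1Gi                      # placeholder; match the claim's request
  accessModes:
    - ReadWriteOnce
  storageClassName: kafka-zookeeper   # placeholder; match the claim's class
  hostPath:
    path: /tmp/data-zoo-0
EOF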
Thanks for your help. I tried again and got this new error:
error: error validating "rbac-namespace-default/kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version is as follows:
kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T07:10:00Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
I can't upgrade to v1.14+ to solve this problem, sorry about that.
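For what it's worth, the kustomization.yaml error comes from applying the whole directory: kubectl releases before 1.14 have no kustomize support, so they try to validate kustomization.yaml as an ordinary manifest, and it has no apiVersion or kind. A possible workaround on 1.12 (a sketch, assuming the directory's other files are plain manifests) is to apply the files one by one and skip the kustomize entry point:
for f in rbac-namespace-default/*.yml rbac-namespace-default/*.yaml; do
  [ -e "$f" ] || continue   # skip unmatched globs
  # kustomization.yaml is a kustomize entry point, not a manifest
  [ "$(basename "$f")" = "kustomization.yaml" ] && continue
  kubectl apply -f "$f"
done
Alternatively, the standalone kustomize binary can render the directory for an older kubectl: kustomize build rbac-namespace-default/ | kubectl apply -f -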
I'm curious if you see any indication that the setup failed because of this error?
Our options at the moment are to restructure the repo to avoid this message or to simply wait until most users have kubectl 1.14. I'm still hoping for a good way to avoid the warning without either of those two.
Feel free to use a pre-6.0 release.
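If you pin to an older release, something like the following should do it (the checkout target is a placeholder; pick an actual pre-6.0 tag from the releases page):
git fetch --tags
git tag --list        # choose any tag earlier than v6.0.0
git checkout vX.Y.Z   # placeholder; substitute the pre-6.0 tag you picked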