alexellis/k8s-on-raspbian

Dashboard in PI Cluster

moficodes opened this issue · 11 comments

Expected Behaviour

Get a dashboard up and running

Current Behaviour

The instructions for getting a dashboard running in the Pi cluster do not work.
The URL for the alternate dashboard returns a 404.

Possible Solution

Steps to Reproduce (for bugs)

  1. Use the Weave Net network driver
  2. Follow https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ to set up the dashboard (the apply command is sketched below)
  3. The dashboard pod never starts
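
For reference, the setup on that page boils down to applying the recommended manifest, something like this (a sketch; the exact manifest URL at the time may differ):

# v1.10.1 recommended manifest - note the image it pulls is an amd64-only build
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml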

Context

Doing this for learning purposes. Wanted to see if the dashboard could be set up on ARM hardware.

Your Environment

Raspberry Pi 3 Model B+

  • Docker version (docker version):
    Docker version 18.09.0, build 4d60db4

  • What version of Kubernetes are you using? kubectl version:
    kubectl version 1.13.1
    kubeadm version 1.13.1
    kubelet version 1.13.1

  • Operating System and version (e.g. Linux, Windows, MacOS):
    PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
    NAME="Raspbian GNU/Linux"
    VERSION_ID="9"
    VERSION="9 (stretch)"
    ID=raspbian
    ID_LIKE=debian
    HOME_URL="http://www.raspbian.org/"
    SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
    BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Hi, thanks for getting in touch.

What do you get from kubectl get events --sort-by=.metadata.creationTimestamp --all-namespaces? If it's over 10 lines, please use a Gist for the output.

You could also try kubectl get pod --all-namespaces to see what is not starting and then use kubectl describe for more info.
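
A minimal triage sequence for a crashing pod looks something like this (the pod name is a placeholder):

# spot anything that is not Running
kubectl get pod --all-namespaces
# check the pod's events, volumes and probes
kubectl describe pod <pod-name> -n kube-system
# read the last run's output, which usually names the actual error
kubectl logs <pod-name> -n kube-system --previous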

kubectl get pod --all-namespaces outputs the following

NAMESPACE     NAME                                   READY     STATUS             RESTARTS   AGE
kube-system   coredns-86c58d9df4-9hx5c               1/1       Running            0          23h
kube-system   coredns-86c58d9df4-nfgk5               1/1       Running            0          23h
kube-system   etcd-k8s-master-1                      1/1       Running            0          23h
kube-system   kube-apiserver-k8s-master-1            1/1       Running            0          23h
kube-system   kube-controller-manager-k8s-master-1   1/1       Running            0          23h
kube-system   kube-proxy-4k2mc                       1/1       Running            0          23h
kube-system   kube-proxy-9xbrw                       1/1       Running            0          23h
kube-system   kube-proxy-l6kz6                       1/1       Running            0          23h
kube-system   kube-proxy-rfntk                       1/1       Running            0          23h
kube-system   kube-scheduler-k8s-master-1            1/1       Running            0          23h
kube-system   kubernetes-dashboard-57df4db6b-ftc6v   0/1       CrashLoopBackOff   6          10m
kube-system   weave-net-5rmmn                        2/2       Running            1          23h
kube-system   weave-net-7ncw5                        2/2       Running            0          23h
kube-system   weave-net-ccvwc                        2/2       Running            0          23h
kube-system   weave-net-fknrp                        2/2       Running            0          23h

This is the output from kubectl describe on the failing pod:

Name:               kubernetes-dashboard-57df4db6b-ftc6v
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node-3/192.168.0.103
Start Time:         Tue, 01 Jan 2019 12:42:10 -0500
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=57df4db6b
Annotations:        <none>
Status:             Running
IP:                 10.47.0.1
Controlled By:      ReplicaSet/kubernetes-dashboard-57df4db6b
Containers:
  kubernetes-dashboard:
    Container ID:  docker://75e26f8dfb8731f09296c646afc380cc2640003dfafdac4f3f5489288dc14ed6
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 01 Jan 2019 12:48:56 -0500
      Finished:     Tue, 01 Jan 2019 12:48:56 -0500
    Ready:          False
    Restart Count:  6
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-jd5sb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  kubernetes-dashboard-token-jd5sb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-jd5sb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                 Message
  ----     ------     ----              ----                 -------
  Normal   Scheduled  8m                default-scheduler    Successfully assigned kube-system/kubernetes-dashboard-57df4db6b-ftc6v to k8s-node-3
  Normal   Pulling    8m                kubelet, k8s-node-3  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulled     7m                kubelet, k8s-node-3  Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Started    6m (x4 over 7m)   kubelet, k8s-node-3  Started container
  Normal   Pulled     5m (x4 over 7m)   kubelet, k8s-node-3  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal   Created    5m (x5 over 7m)   kubelet, k8s-node-3  Created container
  Warning  BackOff    3m (x25 over 7m)  kubelet, k8s-node-3  Back-off restarting failed container

The output for kubectl get events --sort-by=.metadata.creationTimestamp --all-namespaces

NAMESPACE     LAST SEEN   FIRST SEEN   COUNT     NAME                                                    KIND         SUBOBJECT                               TYPE      REASON              SOURCE                  MESSAGE
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca58f45ef129   Pod                                                  Normal    Scheduled           default-scheduler       Successfully assigned kube-system/kubernetes-dashboard-57df4db6b-ftc6v to k8s-node-3
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b.1575ca58ebae7eb2         ReplicaSet                                           Normal    SuccessfulCreate    replicaset-controller   Created pod: kubernetes-dashboard-57df4db6b-ftc6v
kube-system   11m         11m          1         kubernetes-dashboard.1575ca58e4964d57                   Deployment                                           Normal    ScalingReplicaSet   deployment-controller   Scaled up replica set kubernetes-dashboard-57df4db6b to 1
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca599c67c84c   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulling             kubelet, k8s-node-3     pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   10m         10m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca676bd83518   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulled              kubelet, k8s-node-3     Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   9m          10m          5         kubernetes-dashboard-57df4db6b-ftc6v.1575ca6894db3dd3   Pod          spec.containers{kubernetes-dashboard}   Normal    Created             kubelet, k8s-node-3     Created container
kube-system   9m          10m          4         kubernetes-dashboard-57df4db6b-ftc6v.1575ca68c23b08ab   Pod          spec.containers{kubernetes-dashboard}   Normal    Started             kubelet, k8s-node-3     Started container
kube-system   9m          10m          4         kubernetes-dashboard-57df4db6b-ftc6v.1575ca69065f4ec7   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulled              kubelet, k8s-node-3     Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
kube-system   1m          10m          48        kubernetes-dashboard-57df4db6b-ftc6v.1575ca698da0d203   Pod          spec.containers{kubernetes-dashboard}   Warning   BackOff             kubelet, k8s-node-3     Back-off restarting failed container
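
One detail worth flagging in the describe output above: the image is k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1, an amd64 build, while the nodes are ARM Raspberry Pis. An amd64 binary on ARM dies on startup, which matches the identical Started/Finished timestamps. The crashed container's log should confirm it (pod name taken from the output above):

kubectl logs kubernetes-dashboard-57df4db6b-ftc6v -n kube-system --previous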

Hi,

It looks like there is a problem with the version of the dashboard that the Kubernetes community maintains. Their "head" version seems to work fine, so I've updated the guide.

Since I have this working now, I'll close the issue - please try out the new instructions and let me know how you get on.
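
For reference, the updated step applies the head manifest, roughly like this (the path is an assumption; the guide has the exact command):

# assumed path for the "head" manifest, whose images also run on ARM
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/head.yaml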

Alex

I may be doing this wrong.

I got the dashboard running. I then tried to access it via the URL http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Got a 404. Then I changed the name to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard-head:/proxy/ and got:

Error: 'tls: oversized record received with length 20527'
Trying to reach: 'https://10.47.0.1:9090/'

Then I changed https to http, since that TLS error generally means an HTTPS client has reached a plain-HTTP endpoint (the head deployment serves HTTP on port 9090).
So the final URL became http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard-head:/proxy/
And this got me to the dashboard. But there is nothing in the dashboard. Are there any additional steps required to get my nodes, pods and services to show up in the dashboard?

I appreciate the tutorial and your help. Probably missing something silly.

@alexellis Take a look when you get some downtime.

kubectl create clusterrolebinding kubernetes-dashboard-head --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard-head
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-head created

Adding this made it work for me.
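
(Worth noting: cluster-admin gives the dashboard's service account full control of the cluster, which is fine for a learning setup like this but worth scoping down anywhere else.)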

That step was documented - it's the large section of YAML. It should work for you as it worked for me on a brand-new cluster. It's the step before the kubectl apply -f statement.

I used kubectl port-forward instead of proxy.
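
For anyone following along, the port-forward route looks something like this (service name and port inferred from the URLs above):

# forward local 9090 to the dashboard service, then browse http://localhost:9090
kubectl port-forward -n kube-system svc/kubernetes-dashboard-head 9090:9090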

That's what I missed. Proxy works with the clusterrolebinding 😃
Thanks a bunch, Alex. 🎉