DataDog/pupernetes

Display more logs when polling the kube-apiserver

JulienBalestra opened this issue · 1 comment

Today, when starting pupernetes, the logging looks like this:

sudo ./pupernetes run sandbox/
I0522 11:35:14.995714   27067 clean.go:32] Removed /home/jb/go/src/github.com/DataDog/pupernetes/sandbox/etcd-data
I0522 11:35:14.995893   27067 clean.go:143] Cleanup successfully finished
I0522 11:35:15.177233   27067 systemd.go:108] Already created systemd unit: p8s-kubelet.service, untouched
I0522 11:35:15.177352   27067 systemd.go:108] Already created systemd unit: p8s-etcd.service, untouched
I0522 11:35:17.110324   27067 setup.go:272] Setup ready /home/jb/go/src/github.com/DataDog/pupernetes/sandbox
I0522 11:35:17.110564   27067 run.go:63] Timeout for this current run is 6h0m0s
I0522 11:35:17.110611   27067 systemd.go:40] Starting systemd unit: p8s-etcd.service ...
I0522 11:35:17.846860   27067 systemd.go:40] Starting systemd unit: p8s-kubelet.service ...
I0522 11:35:18.853324   27067 state.go:35] Kubenertes apiserver not ready yet: Get http://127.0.0.1:8080/healthz: dial tcp 127.0.0.1:8080: connect: connection refused
I0522 11:35:26.855198   27067 state.go:35] Kubenertes apiserver not ready yet: bad status code for http://127.0.0.1:8080/healthz: 500
I0522 11:35:29.855681   27067 kubectl.go:14] Calling kubectl apply -f /home/jb/go/src/github.com/DataDog/pupernetes/sandbox/manifest-api ...
I0522 11:35:30.163899   27067 kubectl.go:21] Successfully applied manifests:
serviceaccount "coredns" created
clusterrole.rbac.authorization.k8s.io "system:coredns" created
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created
configmap "coredns" created
deployment.extensions "coredns" created
service "coredns" created
serviceaccount "kube-controller-manager" created
pod "kube-controller-manager" created
daemonset.extensions "kube-proxy" created
daemonset.extensions "kube-scheduler" created
clusterrolebinding.rbac.authorization.k8s.io "p8s-admin" created
I0522 11:35:30.164729   27067 notify.go:22] Not running in systemd service, skipping the notify
I0522 11:35:30.164791   27067 run.go:143] Pupernetes is ready

A lot of things could happen during this window:

I0522 11:35:18.853324   27067 state.go:35] Kubenertes apiserver not ready yet: Get http://127.0.0.1:8080/healthz: dial tcp 127.0.0.1:8080: connect: connection refused
I0522 11:35:26.855198   27067 state.go:35] Kubenertes apiserver not ready yet: bad status code for http://127.0.0.1:8080/healthz: 500

It would be useful to display a more detailed state of the kube-apiserver readiness, such as:

  • The phase / conditions of the static pod
  • The restart counter of its containers
  • Even the container logs, accessible via the kubelet logs API
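
A minimal sketch of how such a richer status could be gathered, assuming the kubelet read-only API is exposed on its default port 10255 (the port, the payload subset, and the `summarize` helper are assumptions for illustration, not existing pupernetes code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// podList mirrors the small subset of the kubelet /pods payload (a v1.PodList)
// needed here: pod phase and container restart counters.
type podList struct {
	Items []struct {
		Metadata struct {
			Name      string `json:"name"`
			Namespace string `json:"namespace"`
		} `json:"metadata"`
		Status struct {
			Phase             string `json:"phase"`
			ContainerStatuses []struct {
				Name         string `json:"name"`
				RestartCount int    `json:"restartCount"`
			} `json:"containerStatuses"`
		} `json:"status"`
	} `json:"items"`
}

// summarize turns the raw /pods JSON into human-readable status lines.
func summarize(raw []byte) ([]string, error) {
	var pods podList
	if err := json.Unmarshal(raw, &pods); err != nil {
		return nil, err
	}
	var lines []string
	for _, p := range pods.Items {
		lines = append(lines, fmt.Sprintf("%s/%s phase=%s",
			p.Metadata.Namespace, p.Metadata.Name, p.Status.Phase))
		for _, c := range p.Status.ContainerStatuses {
			lines = append(lines, fmt.Sprintf("  container=%s restarts=%d",
				c.Name, c.RestartCount))
		}
	}
	return lines, nil
}

func main() {
	// Assumption: the kubelet read-only API listens on 127.0.0.1:10255.
	resp, err := http.Get("http://127.0.0.1:10255/pods")
	if err != nil {
		fmt.Println("kubelet not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("cannot read pod list:", err)
		return
	}
	lines, err := summarize(body)
	if err != nil {
		fmt.Println("cannot decode pod list:", err)
		return
	}
	for _, l := range lines {
		fmt.Println(l)
	}
}
```

Printed between two polling attempts, lines like `kube-system/kube-apiserver phase=Pending` or `restarts=3` would make it much clearer why `/healthz` is still failing.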

This could be fixed with #37, as it moves the kube-apiserver to systemd.
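
Independently of kubelet introspection, the `/healthz` endpoint itself can explain a 500: with the `?verbose` query parameter, the kube-apiserver lists the result of each individual health check in the response body. A minimal sketch (the poll URL matches the logs above; the `checkHealthz` helper name is mine):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// checkHealthz polls the apiserver health endpoint and returns the status
// code and body; with ?verbose, the body names each failing check.
func checkHealthz(url string) (int, string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return resp.StatusCode, "", err
	}
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := checkHealthz("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		fmt.Println("apiserver not ready yet:", err)
		return
	}
	fmt.Printf("status=%d\n%s\n", code, body)
}
```

Logging that body instead of only "bad status code ... 500" would already tell the user which check (e.g. etcd) is still failing.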