I am getting a "http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused" error.
wpoosanguansit opened this issue · 11 comments
Hi, I followed the instructions up to the init step. At that point I am getting:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[...]
Thanks for your help.
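For anyone hitting this, the failing check can be reproduced by hand. A minimal sketch (assuming curl is installed and a systemd-managed kubelet):

```shell
#!/bin/sh
# Probe the same endpoint kubeadm polls during [kubelet-check].
check_kubelet() {
  if curl -sSf --max-time 2 "http://localhost:10248/healthz" >/dev/null 2>&1; then
    echo "kubelet: healthy"
  else
    # "connection refused" means nothing is listening on 10248, i.e. the
    # kubelet process itself is down or crash-looping, not a network issue.
    echo "kubelet: unreachable - inspect 'systemctl status kubelet' and 'journalctl -xeu kubelet'"
  fi
}
check_kubelet
```

Since the healthz port only binds to localhost, the cause is almost always in the kubelet's own journal rather than anywhere on the network.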
Context
Your Environment
- Docker version (docker version):
sudo docker version
Client:
Version: 1.8.3
API version: 1.20
Go version: go1.4.3
Git commit: f4bf5c7
Built:
OS/Arch: linux/arm
Server:
Version: 1.8.3
API version: 1.20
Go version: go1.4.3
Git commit: f4bf5c7
Built:
OS/Arch: linux/arm
- What version of Kubernetes are you using? (kubectl version):
sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/arm"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
- Operating System and version (e.g. Linux, Windows, MacOS):
Mac OS High Sierra
- What ARM or Raspberry Pi board are you using?
Raspberry Pi 3 B+
Intro
Alexellis, thank you for your work. I appreciate your love for documentation and admire that you have documented all the steps.
I have the same issue but would like to provide additional/better formatted data.
Expected Behaviour
When configuring the master node,
sudo kubeadm init --token-ttl=0
should succeed.
I tried applying the sed replacements described in https://github.com/alexellis/k8s-on-raspbian/blob/master/GUIDE.md#initialize-your-master-node when the .yaml files are generated.
Current Behaviour
pi@k8s-master-1:~ $ sudo kubeadm init --token-ttl=0
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.88.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.88.10 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.88.10]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[...]
Unfortunately, an error has occurred:
timed out waiting for the condition
[...]
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
It stays at this point. I use sudo kubeadm reset to reset the installation.
Possible Solution
I don't know.
Maybe I should somehow delay the [kubelet-check] phase:
[...]
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[...]
Steps to Reproduce (for bugs)
- Follow https://github.com/alexellis/k8s-on-raspbian/blob/master/GUIDE.md#initialize-your-master-node and run sudo kubeadm init --token-ttl=0
- Wait until [control-plane] Creating static Pod manifest for "kube-apiserver", then apply the inline sed substitutions.
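For reference, the inline sed step is an edit of this shape, applied as soon as kubeadm writes the manifest. The file path is real kubeadm output, but the field values below are only a hypothetical illustration of the technique; the actual substitutions are the ones in the GUIDE. The sketch works on a local stand-in file so it is runnable anywhere (on a node the target is /etc/kubernetes/manifests/kube-apiserver.yaml, edited with sudo):

```shell
#!/bin/sh
# Illustrative only: patch a liveness-probe field in a freshly generated
# manifest. The threshold values here are placeholders, not the guide's.
MANIFEST="${1:-kube-apiserver.yaml}"

# Tiny stand-in for the generated manifest, so the sketch runs anywhere.
cat > "$MANIFEST" <<'EOF'
livenessProbe:
  failureThreshold: 8
  initialDelaySeconds: 15
EOF

# The substitution itself: raise the probe threshold in place (GNU sed).
sed -i 's/failureThreshold: 8/failureThreshold: 100/' "$MANIFEST"
grep failureThreshold "$MANIFEST"
```

The point of the timing in the steps above is that kubeadm overwrites the manifests during init, so the sed must land after the file is written but before the kubelet gives up on the probe.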
Context
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Mon 2019-04-08 16:31:21 BST; 6s ago
Docs: https://kubernetes.io/docs/home/
Process: 9304 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 9304 (code=exited, status=255)
CPU: 2.531s
Apr 08 16:31:21 k8s-master-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
journalctl -xeu kubelet
-- The start-up result is done.
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.393101 11134 server.go:417] Version: v1.14.0
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.396709 11134 plugins.go:103] No cloud provider specified.
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.397071 11134 server.go:754] Client rotation is on, will bootstrap in background
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.428405 11134 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.617689 11134 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.626248 11134 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.628372 11134 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.628555 11134 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:do
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.629821 11134 container_manager_linux.go:286] Creating device plugin manager: true
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.630064 11134 state_mem.go:36] [cpumanager] initializing new in-memory state store
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.630825 11134 state_mem.go:84] [cpumanager] updated default cpuset: ""
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.630943 11134 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.631681 11134 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.631894 11134 kubelet.go:304] Watching apiserver
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.656102 11134 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.88.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.658924 11134 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.88.10:6443/api/v1/services?limit=500&resourceVersion=0:
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.659089 11134 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.88.10:6443/api/v1/pods?fieldSelector=spec.nodeName%
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.670977 11134 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.671142 11134 client.go:104] Start docker client with request timeout=2m0s
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: W0408 16:36:21.686627 11134 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.686855 11134 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: W0408 16:36:21.688070 11134 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: W0408 16:36:21.721951 11134 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.722389 11134 docker_service.go:253] Docker cri networking managed by cni
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: W0408 16:36:21.722931 11134 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.843084 11134 docker_service.go:258] Docker Info: &{ID:O7JZ:NQVM:XFMI:GI7E:GG3W:65QB:PUSF:5CAZ:7QZR:CL3W:EQ23:DIEQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStop
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.843828 11134 docker_service.go:271] Setting cgroupDriver to cgroupfs
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.923486 11134 remote_runtime.go:62] parsed scheme: ""
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.923640 11134 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.923880 11134 remote_image.go:50] parsed scheme: ""
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.923979 11134 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.924959 11134 asm_arm.s:868] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.925155 11134 clientconn.go:796] ClientConn switching balancer to "pick_first"
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.925439 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x85623b0, CONNECTING
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.926529 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x85623b0, READY
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.926826 11134 asm_arm.s:868] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.926936 11134 clientconn.go:796] ClientConn switching balancer to "pick_first"
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.927166 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x8562b90, CONNECTING
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.928058 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x8562b90, READY
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.942645 11134 kuberuntime_manager.go:210] Container runtime docker initialized, version: 18.09.0, apiVersion: 1.39.0
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.950470 11134 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://192.168.88.10:6443/apis/storage.k8s.io/v1beta1/csidrivers?l
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.952890 11134 server.go:1037] Started kubelet
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.953798 11134 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.955864 11134 server.go:141] Starting to listen on 0.0.0.0:10250
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: E0408 16:36:21.957875 11134 event.go:200] Unable to write event: 'Post https://192.168.88.10:6443/api/v1/namespaces/default/events: dial tcp 192.168.88.10:6443: connect: connection refused' (may
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.967077 11134 server.go:343] Adding debug handlers to kubelet server.
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.968478 11134 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.969338 11134 status_manager.go:152] Starting to sync pod status with apiserver
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.969775 11134 kubelet.go:1806] Starting kubelet main sync loop.
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.970487 11134 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Apr 08 16:36:21 k8s-master-1 kubelet[11134]: I0408 16:36:21.985920 11134 volume_manager.go:248] Starting Kubelet Volume Manager
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.007267 11134 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://192.168.88.10:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.016133 11134 desired_state_of_world_populator.go:130] Desired state populator starts to run
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.027184 11134 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitial
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.028674 11134 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.88.10:6443/apis/node.k8s.io/v1beta1/runtimeclass
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.073113 11134 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.088459 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.091127 11134 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.134072 11134 kubelet_node_status.go:72] Attempting to register node k8s-master-1
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.137462 11134 kubelet_node_status.go:94] Unable to register node "k8s-master-1" with API server: Post https://192.168.88.10:6443/api/v1/nodes: dial tcp 192.168.88.10:6443: connect:
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.144922 11134 clientconn.go:440] parsed scheme: "unix"
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.147586 11134 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.148932 11134 asm_arm.s:868] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}]
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.149092 11134 clientconn.go:796] ClientConn switching balancer to "pick_first"
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.149437 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x8b39920, CONNECTING
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.150806 11134 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x8b39920, READY
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: W0408 16:36:22.168894 11134 nvidia.go:66] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.194171 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.211106 11134 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: Get https://192.168.88.10:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.295595 11134 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.298382 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.337945 11134 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.354840 11134 kubelet_node_status.go:72] Attempting to register node k8s-master-1
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.357620 11134 kubelet_node_status.go:94] Unable to register node "k8s-master-1" with API server: Post https://192.168.88.10:6443/api/v1/nodes: dial tcp 192.168.88.10:6443: connect:
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.403820 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.504277 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.604752 11134 kubelet.go:2244] node "k8s-master-1" not found
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.613838 11134 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.88.10:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.641551 11134 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.657770 11134 cpu_manager.go:155] [cpumanager] starting with none policy
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.657902 11134 cpu_manager.go:156] [cpumanager] reconciling every 10s
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: I0408 16:36:22.658083 11134 policy_none.go:42] [cpumanager] none policy: Start
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: E0408 16:36:22.659848 11134 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.88.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-
Apr 08 16:36:22 k8s-master-1 kubelet[11134]: F0408 16:36:22.662743 11134 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set suppo
Apr 08 16:36:22 k8s-master-1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Apr 08 16:36:22 k8s-master-1 systemd[1]: kubelet.service: Unit entered failed state.
Apr 08 16:36:22 k8s-master-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
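The fatal line in that journal (Failed to start ContainerManager ... BestEffort QOS cgroup) commonly points at the memory cgroup being disabled, which is the Raspbian default. A frequently cited fix is adding the cgroup boot flags to /boot/cmdline.txt and rebooting. A sketch, staged against a local copy so it can run without root (on a real Pi, edit /boot/cmdline.txt with sudo; it must remain a single line):

```shell
#!/bin/sh
# Sketch: append the cgroup flags Kubernetes needs to the kernel cmdline.
# Runs on a local copy; the real file on Raspbian is /boot/cmdline.txt.
CMDLINE="${1:-cmdline.txt}"

# Stand-in for a typical Raspbian cmdline so the sketch is self-contained.
echo "console=serial0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 fsck.repair=yes rootwait" > "$CMDLINE"

FLAGS="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
# Append only if not already present; cmdline.txt must stay one line.
grep -q "cgroup_enable=memory" "$CMDLINE" || sed -i "s/$/ $FLAGS/" "$CMDLINE"

cat "$CMDLINE"   # the change takes effect after a reboot
```

After rebooting, /proc/cgroups should list the memory controller as enabled, and the ContainerManager start-up error should go away if this was the cause.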
Your Environment
- Docker version (docker version):
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:57:21 2018
OS/Arch: linux/arm
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:17:57 2018
OS/Arch: linux/arm
Experimental: false
- What version of Kubernetes are you using? (kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/arm"}
- Operating System and version (e.g. Linux, Windows, MacOS):
Raspbian GNU/Linux 9
- What ARM or Raspberry Pi board are you using?
RPi 2 B rev1.1
Experiencing the same issue as well on a Pi 3B.
Running into the same problem on Ubuntu 16.04 with v1.15.0. Also, internet access stops working after kubeadm init.
See my recommendation to use k3s in the main readme.
I'd encourage you to raise an issue with kubeadm in the meantime: https://github.com/kubernetes/kubeadm
I am experiencing the same issue with Ubuntu 18.04 and kubeadm v1.15.3.
I made sure that the port is open (with "sudo ufw allow 10248/tcp"), but the same thing still happens.
Note: I am using plain Kubernetes, not Kubernetes on Raspbian, but just found this issue when I searched for this error.
So I guess, it's a general problem with Kubernetes and not something Raspbian-specific.
Same problem on CentOS 7, Kubernetes 1.16.
I am experiencing the same issue with Ubuntu 18.04 and kubeadm v1.16.1:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:102
and the output gave me some advice:
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
After running journalctl, I got the failure message:
kubelet[27956]: F1014 19:15:43.743985 27956 server.go:271] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
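That last line has a well-known remedy: make Docker use the systemd cgroup driver so it matches the kubelet. A sketch, staged to a local file so it runs without root (on a real node the file is /etc/docker/daemon.json, written with sudo and followed by a daemon restart):

```shell
#!/bin/sh
# Sketch: align Docker's cgroup driver with the kubelet's ("systemd").
DAEMON_JSON="${1:-daemon.json}"   # real path: /etc/docker/daemon.json

cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# On the node, apply it with:
#   sudo systemctl daemon-reload
#   sudo systemctl restart docker
#   sudo systemctl restart kubelet
grep cgroupdriver "$DAEMON_JSON"
```

After restarting, `docker info` should report "Cgroup Driver: systemd", matching what the kubelet expects, and the crash-loop from the mismatch should stop.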
Experiencing the same error.
Experiencing the same error.
Same on CentOS 7.
Please try k3s, which is designed to accommodate the RPi and is tested on it.
This repository is for Kubernetes on Raspbian; I would not expect CentOS and other distributions to work out of the box, since they are not tested.
For technical support on kubeadm please join the Kubernetes Slack, PRs are welcome.