kubernetes/kubernetes

Networking: Pods are unable to communicate with other pods, the API server, or the outside world

Closed this issue · 15 comments

Vonor commented

What happened:

On a freshly bootstrapped cluster, pods can reach neither other pods, nor the API server, nor the outside world.

What you expected to happen:

Pods should be able to reach other pods, the API server, and the outside world.

How to reproduce it (as minimally and precisely as possible):

I tried the following two variants; the result is the same.

1: NanoPi m4 as master, WiFi connected to my cable modem, eth0 connected to a separate switch for the node network. The nodes are Raspberry Pi 3s running Debian Buster plus one amd64-based node.
The physical network is as follows:
Internet <--> Cable Modem <--> 192.168.178.0/24 <--> (WiFi) NanoPi m4 (LAN) <--> 172.16.0.0/24 Switch <--> Nodes
NAT routing works from the nodes to the internet.
Docker CE installed according to the Docker website.
Kubernetes installed according to the Kubernetes website.
kubeadm init --apiserver-advertise-address=172.16.0.1 --pod-network-cidr=10.244.0.0/16
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
kubeadm join 172.16.0.1:6443 --token kkc1c7.hm4r65gbyf4g059a --discovery-token-ca-cert-hash sha256:dc04b50bd56251c2c2f2b315ad6dcca9ddf0ffda964e3f3ef63fd6a7d7800a75

2: VirtualBox VMs
The network is similar to the above:
Internet <--> Cable Modem <--> 192.168.178.0/24 <--> (WiFi) Laptop <--> (Bridged Network) VBox Master VM <--> 172.16.0.0/24 Virtual Network "k8s" <--> Node VM
Networking on the VMs works as expected. Same steps as above to set up the cluster.

kubectl run alpine --image=alpine -it --restart=Never -- sh
/ # ping google.de
ping: bad address 'google.de'

Pinging the various cluster IPs shown in `env` fails as well; only the gateway IP is pingable.
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP 
    link/ether 0a:58:0a:f4:01:08 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.8/24 scope global eth0
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.244.1.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
/ # ping 10.244.1.1
PING 10.244.1.1 (10.244.1.1): 56 data bytes
64 bytes from 10.244.1.1: seq=0 ttl=64 time=0.193 ms
64 bytes from 10.244.1.1: seq=1 ttl=64 time=0.173 ms
^C
--- 10.244.1.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.173/0.183/0.193 ms

Anything else we need to know?:

I first ran into the issue while trying to use Helm, where `helm ls` resulted in a timeout. Hence I tried running the above-mentioned alpine pod in the default namespace as well as in kube-system. Below is some testing with alpine running in kube-system.

kubectl run alpine --image=alpine -it -n kube-system --restart=Never -- sh
If you don't see a command prompt, try pressing enter.

/ # nslookup google.de
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'google.de': Try again
/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # env
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
HOSTNAME=alpine
TILLER_DEPLOY_SERVICE_HOST=10.103.53.185
SHLVL=1
HOME=/root
KUBE_DNS_SERVICE_HOST=10.96.0.10
TILLER_DEPLOY_SERVICE_PORT=44134
TILLER_DEPLOY_PORT=tcp://10.103.53.185:44134
TILLER_DEPLOY_PORT_44134_TCP_ADDR=10.103.53.185
KUBE_DNS_PORT=udp://10.96.0.10:53
TILLER_DEPLOY_PORT_44134_TCP_PORT=44134
KUBE_DNS_SERVICE_PORT=53
TILLER_DEPLOY_PORT_44134_TCP_PROTO=tcp
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
TILLER_DEPLOY_SERVICE_PORT_TILLER=44134
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBE_DNS_PORT_53_TCP_ADDR=10.96.0.10
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_ADDR=10.96.0.10
TILLER_DEPLOY_PORT_44134_TCP=tcp://10.103.53.185:44134
KUBE_DNS_PORT_53_TCP_PORT=53
KUBE_DNS_PORT_53_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_PORT=53
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_UDP_PROTO=udp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
KUBE_DNS_PORT_53_TCP=tcp://10.96.0.10:53
KUBE_DNS_PORT_53_UDP=udp://10.96.0.10:53
/ # ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 56 data bytes
^C
--- 10.96.0.10 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.96.0.1
PING 10.96.0.1 (10.96.0.1): 56 data bytes
^C
--- 10.96.0.1 ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.103.53.185
PING 10.103.53.185 (10.103.53.185): 56 data bytes
^C
--- 10.103.53.185 ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss

Bringing up a 2nd instance of alpine and trying to ping it:

/ # ping 10.244.1.10
PING 10.244.1.10 (10.244.1.10): 56 data bytes
^C
--- 10.244.1.10 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
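
For what it's worth, ClusterIPs are virtual addresses implemented by kube-proxy, so ICMP to them can fail even on a healthy cluster; checking the actual service ports is more telling. A rough sketch of what one could run from inside the pod (BusyBox tools in alpine; IPs taken from the env output above, adjust as needed):

/ # nc -zv -w 2 10.96.0.1 443        # API server: test the HTTPS port instead of ping
/ # nc -zv -w 2 10.96.0.10 53        # kube-dns: test the TCP DNS port
/ # nslookup kubernetes.default.svc.cluster.local 10.96.0.10   # resolve through the service VIP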

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/arm64"}
    Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/arm64"}

  • OS (e.g. from /etc/os-release): The NanoPi m4 runs Armbian Stretch; all others, both physical machines and VMs, run Debian Buster.

I hope the information provided is sufficient. If not, please let me know and I will provide additional information. You can also reach me on Slack (@Vonor) for some live testing.

/kind bug

Vonor commented

/sig network

Vonor commented

Interesting. As far as I understand, all pods should have either a cluster-internal IP (10.x.x.x) or the host's internal IP (172.16.0.28). But some have the host's external IP (192.168.178.30). Could this already be the issue?

# kubectl get pods -n kube-system -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-24sxl       1/1     Running   0          13h   10.244.0.13      kube    <none>           <none>
coredns-86c58d9df4-d6dw9       1/1     Running   0          13h   10.244.0.14      kube    <none>           <none>
etcd-kube                      1/1     Running   0          13h   192.168.178.30   kube    <none>           <none>
kube-apiserver-kube            1/1     Running   0          13h   192.168.178.30   kube    <none>           <none>
kube-controller-manager-kube   1/1     Running   4          13h   192.168.178.30   kube    <none>           <none>
kube-flannel-ds-amd64-8wrzs    1/1     Running   0          69m   172.16.0.28      hp-01   <none>           <none>
kube-flannel-ds-arm64-8xgrt    1/1     Running   0          13h   192.168.178.30   kube    <none>           <none>
kube-proxy-6kkmw               1/1     Running   0          69m   172.16.0.28      hp-01   <none>           <none>
kube-proxy-dcjnw               1/1     Running   0          13h   192.168.178.30   kube    <none>           <none>
kube-scheduler-kube            1/1     Running   4          13h   192.168.178.30   kube    <none>           <none>
tiller-deploy-9bf668cf-pxx9n   1/1     Running   0          66m   10.244.1.2       hp-01   <none>           <none>
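
For reference, the entries that show a node address (etcd, kube-apiserver, kube-proxy, kube-flannel, ...) are pods running with hostNetwork: true, so they report whatever the kubelet considers the node's primary IP rather than a pod-network IP. One way to confirm which pods use the host network (just a jsonpath sketch, adjust to taste):

# kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostNetwork}{"\n"}{end}'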

@Vonor have you tried to reach the internet using an IP address directly instead of resolving a name first?
Maybe it's a DNS issue.

Vonor commented

I had tried that before too; I just didn't add it to the report here. But here is a full ping test of the network from inside a running pod.

I can already tell that it doesn't seem to be an issue with Flannel. In the above-mentioned VM setup I have tried Weave Net as well and got the same result.

root@kube:~# kubectl run alpine --image=alpine -it --restart=Never -- sh
If you don't see a command prompt, try pressing enter.

## External Interface (Wifi) of the NanoPi m4 (hostname "kube")
/ # ping -c1 192.168.178.30
PING 192.168.178.30 (192.168.178.30): 56 data bytes

--- 192.168.178.30 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## My Laptop
/ # ping -c1 192.168.178.20
PING 192.168.178.20 (192.168.178.20): 56 data bytes

--- 192.168.178.20 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## Another Device (raspberry) on the network, not part of the cluster
/ # ping -c1 192.168.178.3
PING 192.168.178.3 (192.168.178.3): 56 data bytes

--- 192.168.178.3 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## The cable modem (fritz box)
/ # ping -c1 192.168.178.1
PING 192.168.178.1 (192.168.178.1): 56 data bytes

--- 192.168.178.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## The IP of the node, onto which the pod is scheduled
/ # ping -c1 172.16.0.28
PING 172.16.0.28 (172.16.0.28): 56 data bytes
64 bytes from 172.16.0.28: seq=0 ttl=64 time=0.508 ms

--- 172.16.0.28 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.508/0.508/0.508 ms

## The Internal (eth0) Interface of the NanoPi m4 (kube, master node)
/ # ping -c1 172.16.0.1
PING 172.16.0.1 (172.16.0.1): 56 data bytes

--- 172.16.0.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## cni0 device on the master node
/ # ping -c1 10.244.0.1
PING 10.244.0.1 (10.244.0.1): 56 data bytes

--- 10.244.0.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

## cni0 on the node
/ # ping -c1 10.244.1.1
PING 10.244.1.1 (10.244.1.1): 56 data bytes
64 bytes from 10.244.1.1: seq=0 ttl=64 time=0.540 ms

--- 10.244.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.540/0.540/0.540 ms

## And last but not least, Google's name server. Obviously failing, since pinging the cable modem already doesn't work
/ # ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
/ # 

And what I find pretty strange is that even though I used --apiserver-advertise-address, the master node seems to have the wrong IP:

kubeadm init --apiserver-advertise-address=172.16.0.1 --pod-network-cidr=10.244.0.0/16

kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
hp-01   Ready    <none>   25h   v1.13.1   172.16.0.28      <none>        Debian GNU/Linux buster/sid    4.18.0-3-amd64   docker://18.9.0
kube    Ready    master   37h   v1.13.1   192.168.178.30   <none>        Debian GNU/Linux 9 (stretch)   4.19.0-rk3399    docker://18.9.0
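
The INTERNAL-IP shown here comes from the kubelet rather than from --apiserver-advertise-address, so the master picked its WiFi address as its node IP. A hedged sketch of how that is usually overridden on kubeadm-installed kubelets (the Debian packages source /etc/default/kubelet; adjust the path and address to your setup):

echo 'KUBELET_EXTRA_ARGS=--node-ip=172.16.0.1' >> /etc/default/kubelet   # on the master
systemctl daemon-reload && systemctl restart kubelet
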
Vonor commented

Some more info...

I set up two Ubuntu 18.04 VMs.
(On that note: I had tried the same network setup with Debian and it didn't work.)

Internet <--> Cable Modem <--> 192.168.178.0/24 <--> (WiFi) Laptop <--> (Bridged Network) VBox Master VM / Node VM

So both the master and the node are in the 192.168.178.0/24 network.

Docker was set up according to the docker-ce guide for Ubuntu.
Kubernetes was set up according to the "install kubeadm" and "create a single master cluster" docs.

So, the same procedure as on Debian; only the Docker package is specific to Ubuntu instead of Debian.
Flannel was applied.

The result looks good.

root@k8s-master:~# kubectl run alpine --image=alpine -it --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ping google.de
PING google.de (172.217.16.195): 56 data bytes
64 bytes from 172.217.16.195: seq=0 ttl=53 time=16.060 ms
64 bytes from 172.217.16.195: seq=1 ttl=53 time=14.202 ms
64 bytes from 172.217.16.195: seq=2 ttl=53 time=15.890 ms
^C
--- google.de ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 14.202/15.384/16.060 ms

root@k8s-master:~# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master   Ready    master   28m   v1.13.1   192.168.178.27   <none>        Ubuntu 18.04.1 LTS   4.15.0-43-generic   docker://18.6.1
k8s-node     Ready    <none>   21m   v1.13.1   192.168.178.29   <none>        Ubuntu 18.04.1 LTS   4.15.0-43-generic   docker://18.6.1

@Vonor what happens if you try the exact same steps on Debian now?
It seems that when you tried on Debian you were assigning different subnets to nodes with different Debian versions, or maybe that was intended?

Vonor commented

@fntlnz, as written in my previous post, I did exactly that before, with Debian Buster.
However, I just tried again with Debian Stretch.
Same network setup on the VMs as with the above Ubuntu setup.
Debian Stretch netinstall ISO; at tasksel I selected only SSH server and base system.
Docker was installed according to the Docker install guide for Debian. The only difference is that I used 18.06, as on Ubuntu:
apt install docker-ce=18.06.1~ce~3-0~debian

Install Kubernetes according to the docs.

Added br_netfilter to the autoload modules and net.bridge.bridge-nf-call-iptables=1 to sysctl.conf (a minimal sketch of these two steps is below).
Cloned the VM to have a master and a node.
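
A minimal sketch of those two prerequisite steps as typically done on Debian (assuming /etc/modules is used for module autoloading):

echo br_netfilter >> /etc/modules                                # autoload the module at boot
modprobe br_netfilter                                            # load it now
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf  # make bridged traffic traverse iptables
sysctl -p                                                        # apply without rebooting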

On Master

root@k8s-master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.178.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.178.23 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.178.23]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.023245 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kx2wit.dejqyivi1vnabk30
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.178.23:6443 --token kx2wit.dejqyivi1vnabk30 --discovery-token-ca-cert-hash sha256:92134ff0aa19a7af943f9201753ee04338c4ba5da2897411b265b05280550f11

root@k8s-master:~# mkdir .kube
root@k8s-master:~# ln -s /etc/kubernetes/admin.conf .kube/config
root@k8s-master:~# deb https://apt.kubernetes.io/ kubernetes-xenial main^C
root@k8s-master:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

On the node

root@k8s-node:~# kubeadm join 192.168.178.23:6443 --token kx2wit.dejqyivi1vnabk30 --discovery-token-ca-cert-hash sha256:92134ff0aa19a7af943f9201753ee04338c4ba5da2897411b265b05280550f11
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.178.23:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.178.23:6443"
[discovery] Requesting info from "https://192.168.178.23:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.178.23:6443"
[discovery] Successfully established connection with API Server "192.168.178.23:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@k8s-node:~# 

On Master

root@k8s-master:~# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
k8s-master   Ready    master   16m   v1.13.1   192.168.178.23   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-8-amd64    docker://18.6.1
k8s-node     Ready    <none>   11m   v1.13.1   192.168.178.25   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-8-amd64    docker://18.6.1
root@k8s-master:~# kubectl run alpine --image=alpine -it --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ping google.de -c2
ping: bad address 'google.de'
/ # exit
pod default/alpine terminated (Error)
root@k8s-master:~# 
root@k8s-master:~# kubectl get all --all-namespaces -o wide
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default       pod/alpine                               0/1     Error     0          76s   10.244.1.3       k8s-node     <none>           <none>
kube-system   pod/coredns-86c58d9df4-hrhwd             1/1     Running   0          18m   10.244.0.2       k8s-master   <none>           <none>
kube-system   pod/coredns-86c58d9df4-jqnr4             1/1     Running   0          18m   10.244.0.3       k8s-master   <none>           <none>
kube-system   pod/etcd-k8s-master                      1/1     Running   0          15m   192.168.178.23   k8s-master   <none>           <none>
kube-system   pod/kube-apiserver-k8s-master            1/1     Running   0          15m   192.168.178.23   k8s-master   <none>           <none>
kube-system   pod/kube-controller-manager-k8s-master   1/1     Running   0          15m   192.168.178.23   k8s-master   <none>           <none>
kube-system   pod/kube-flannel-ds-amd64-k4hqh          1/1     Running   0          13m   192.168.178.25   k8s-node     <none>           <none>
kube-system   pod/kube-flannel-ds-amd64-wzsfb          1/1     Running   0          17m   192.168.178.23   k8s-master   <none>           <none>
kube-system   pod/kube-proxy-9thlx                     1/1     Running   0          13m   192.168.178.25   k8s-node     <none>           <none>
kube-system   pod/kube-proxy-hnktz                     1/1     Running   0          18m   192.168.178.23   k8s-master   <none>           <none>
kube-system   pod/kube-scheduler-k8s-master            1/1     Running   0          15m   192.168.178.23   k8s-master   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         18m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   18m   k8s-app=kube-dns

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE   CONTAINERS     IMAGES                                   SELECTOR
kube-system   daemonset.apps/kube-flannel-ds-amd64     2         2         2       2            2           beta.kubernetes.io/arch=amd64     17m   kube-flannel   quay.io/coreos/flannel:v0.10.0-amd64     app=flannel,tier=node
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       17m   kube-flannel   quay.io/coreos/flannel:v0.10.0-arm       app=flannel,tier=node
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     17m   kube-flannel   quay.io/coreos/flannel:v0.10.0-arm64     app=flannel,tier=node
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   17m   kube-flannel   quay.io/coreos/flannel:v0.10.0-ppc64le   app=flannel,tier=node
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     17m   kube-flannel   quay.io/coreos/flannel:v0.10.0-s390x     app=flannel,tier=node
kube-system   daemonset.apps/kube-proxy                2         2         2       2            2           <none>                            18m   kube-proxy     k8s.gcr.io/kube-proxy:v1.13.1            k8s-app=kube-proxy

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                     SELECTOR
kube-system   deployment.apps/coredns   2/2     2            2           18m   coredns      k8s.gcr.io/coredns:1.2.6   k8s-app=kube-dns

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                     SELECTOR
kube-system   replicaset.apps/coredns-86c58d9df4   2         2         2       18m   coredns      k8s.gcr.io/coredns:1.2.6   k8s-app=kube-dns,pod-template-hash=86c58d9df4
root@k8s-master:~# 
Vonor commented

I came up with the idea that iptables might be the issue. The idea came because Debian Buster has iptables-legacy, which Ubuntu does not. So after a kubeadm reset I had to flush the nft-backed tables as well as the legacy tables to get rid of everything.

Googling for "iptables kubernetes", I found an article in the Oracle docs.

Giving it a shot on the Debian Stretch VMs created above showed that the node must have the :FORWARD ACCEPT [0:0] policy.
I applied it to my ARM cluster as well and, voilà, both pinging the outside network and `helm ls` work now.

So the question is: why does Kubernetes apply that rule on Ubuntu but not on Debian?
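
For reference, a rough sketch of the workaround applied on the node (setting the default policy of the filter table's FORWARD chain; a blunt instrument, and as it turns out below the underlying cause is the legacy/nft iptables split rather than a missing Kubernetes rule):

iptables -P FORWARD ACCEPT    # default-accept forwarded traffic
iptables -S FORWARD           # verify: the first line should read -P FORWARD ACCEPT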

Vonor commented

OK, it seems the FORWARD policy isn't set to ACCEPT on Ubuntu either. Further digging brings me back to my initial thought regarding the legacy tables.

On Ubuntu

root@k8s-node:~# iptables-save 
# Generated by iptables-save v1.6.1 on Sun Dec 30 01:57:53 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [1:60]
:POSTROUTING ACCEPT [1:60]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-DQAVQW32ZU6GGKNL - [0:0]
:KUBE-SEP-LASJGFFJP3UOS6RQ - [0:0]
:KUBE-SEP-LPGSDLJ3FDW46N4W - [0:0]
:KUBE-SEP-RPTYXHC626XLD5T3 - [0:0]
:KUBE-SEP-SF3LG62VAE5ALYDV - [0:0]
:KUBE-SEP-WXWGHGKZOCNYRYI7 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-K7J76NXP7AUZVFGS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-DQAVQW32ZU6GGKNL -s 10.244.1.5/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-DQAVQW32ZU6GGKNL -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:44134
-A KUBE-SEP-LASJGFFJP3UOS6RQ -s 10.244.0.5/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LASJGFFJP3UOS6RQ -p tcp -m tcp -j DNAT --to-destination 10.244.0.5:53
-A KUBE-SEP-LPGSDLJ3FDW46N4W -s 10.244.0.5/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LPGSDLJ3FDW46N4W -p udp -m udp -j DNAT --to-destination 10.244.0.5:53
-A KUBE-SEP-RPTYXHC626XLD5T3 -s 192.168.178.27/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-RPTYXHC626XLD5T3 -p tcp -m tcp -j DNAT --to-destination 192.168.178.27:6443
-A KUBE-SEP-SF3LG62VAE5ALYDV -s 10.244.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SF3LG62VAE5ALYDV -p tcp -m tcp -j DNAT --to-destination 10.244.0.4:53
-A KUBE-SEP-WXWGHGKZOCNYRYI7 -s 10.244.0.4/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-WXWGHGKZOCNYRYI7 -p udp -m udp -j DNAT --to-destination 10.244.0.4:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.106.20.49/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.20.49/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-SVC-K7J76NXP7AUZVFGS
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SF3LG62VAE5ALYDV
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-LASJGFFJP3UOS6RQ
-A KUBE-SVC-K7J76NXP7AUZVFGS -j KUBE-SEP-DQAVQW32ZU6GGKNL
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-RPTYXHC626XLD5T3
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WXWGHGKZOCNYRYI7
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-LPGSDLJ3FDW46N4W
COMMIT
# Completed on Sun Dec 30 01:57:53 2018
# Generated by iptables-save v1.6.1 on Sun Dec 30 01:57:53 2018
*filter
:INPUT ACCEPT [184:65153]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [156:20511]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Dec 30 01:57:53 2018
root@k8s-node:~# 

On Debian (Real Host)

root@hp-01:~# iptables-save
# Generated by xtables-save v1.8.2 on Sun Dec 30 01:59:40 2018
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
COMMIT
# Completed on Sun Dec 30 01:59:40 2018
# Generated by xtables-save v1.8.2 on Sun Dec 30 01:59:40 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
COMMIT
# Completed on Sun Dec 30 01:59:40 2018
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them


root@hp-01:~# iptables-legacy-save
# Generated by iptables-save v1.8.2 on Sun Dec 30 01:59:48 2018
*filter
:INPUT ACCEPT [95:42183]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [85:12483]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Dec 30 01:59:48 2018
# Generated by iptables-save v1.8.2 on Sun Dec 30 01:59:48 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-36PH6GXFEN4T4CYJ - [0:0]
:KUBE-SEP-6YRNBWJNCIVVR4LN - [0:0]
:KUBE-SEP-AAJNJKLUKIUTGBWJ - [0:0]
:KUBE-SEP-CWXUMBMH75C7ZUMU - [0:0]
:KUBE-SEP-ZIMN66YWSS3KERKE - [0:0]
:KUBE-SEP-ZKAJ6VCRKDKOPZJN - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-K7J76NXP7AUZVFGS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-36PH6GXFEN4T4CYJ -s 10.244.0.13/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-36PH6GXFEN4T4CYJ -p tcp -m tcp -j DNAT --to-destination 10.244.0.13:53
-A KUBE-SEP-6YRNBWJNCIVVR4LN -s 10.244.0.14/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-6YRNBWJNCIVVR4LN -p udp -m udp -j DNAT --to-destination 10.244.0.14:53
-A KUBE-SEP-AAJNJKLUKIUTGBWJ -s 10.244.0.14/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-AAJNJKLUKIUTGBWJ -p tcp -m tcp -j DNAT --to-destination 10.244.0.14:53
-A KUBE-SEP-CWXUMBMH75C7ZUMU -s 10.244.0.13/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CWXUMBMH75C7ZUMU -p udp -m udp -j DNAT --to-destination 10.244.0.13:53
-A KUBE-SEP-ZIMN66YWSS3KERKE -s 172.16.0.1/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZIMN66YWSS3KERKE -p tcp -m tcp -j DNAT --to-destination 172.16.0.1:6443
-A KUBE-SEP-ZKAJ6VCRKDKOPZJN -s 10.244.1.15/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZKAJ6VCRKDKOPZJN -p tcp -m tcp -j DNAT --to-destination 10.244.1.15:44134
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.248.29/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.248.29/32 -p tcp -m comment --comment "kube-system/tiller-deploy:tiller cluster IP" -m tcp --dport 44134 -j KUBE-SVC-K7J76NXP7AUZVFGS
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-36PH6GXFEN4T4CYJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-AAJNJKLUKIUTGBWJ
-A KUBE-SVC-K7J76NXP7AUZVFGS -j KUBE-SEP-ZKAJ6VCRKDKOPZJN
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-ZIMN66YWSS3KERKE
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CWXUMBMH75C7ZUMU
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-6YRNBWJNCIVVR4LN
COMMIT
# Completed on Sun Dec 30 01:59:48 2018
root@hp-01:~# 
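
So on Buster the Kubernetes service and forwarding rules end up in the legacy tables while Docker's rules go through the nft-backed iptables, which is exactly the kind of mixed state that causes trouble. A hedged sketch of how the host can be forced onto a single backend on Debian Buster (followed by a reboot, or at least restarting Docker and the kubelet):

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
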

Oh right @Vonor, you have a machine with Debian Buster; I think you are hitting the exact same issue described in #71305.

Vonor commented

It indeed seems to be the same issue. So let's close here and keep it in #71305

/close

@Vonor: Closing this issue.

In response to this:

It indeed seems to be the same issue. So let's close here and keep it in #71305

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Check the kube-flannel.yml file and also the command used to create the cluster, i.e. kubeadm init --pod-network-cidr=10.244.0.0/16. By default kube-flannel.yml uses the 10.244.0.0/16 network, so if you want to change the pod network CIDR, change it in that file as well (see the excerpt below).
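
For reference, the value that has to match lives in the kube-flannel ConfigMap inside kube-flannel.yml; the relevant excerpt looks roughly like this (taken from the manifest version linked earlier in this thread):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

If kubeadm init is run with a different --pod-network-cidr, "Network" has to be edited here before applying the manifest.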

Hi everyone,
I have a K8s setup and everything is working fine for me.
When I ping the pod's service IP, I get a response like the one below.

Request timeout for icmp_seq 1
92 bytes from 10.20.60.15: Redirect Host(New addr: 10.20.60.103)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 5d0d 0 0000 3f 01 ba9e 10.20.19.111 10.20.60.103

But at the same time, I am able to SSH in and also get remote access via XRDP. I am using MetalLB for load balancing. This config has been running for a year with K8s v1.19*. Can someone help me fix this ping issue?

Thanks in advance.
Mark.

Hi,

Scenario:

After successfully installing Rancher we noticed some issues: inter-communication between services/pods was not happening. We then tested with a sample helloworld deployment exposed through a NodePort service (FYI, this exposes the service on each node's IP at a static port).

Let's assume the exposed port was 31150 and the pod was running on worker2. Ideally we should be able to telnet to worker1 31150, worker2 31150 and worker3 31150, but telnet only worked against worker2, where the pod was running, which was quite strange behaviour (a sketch of such a test is below).
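
A sketch of that kind of test (the image is a placeholder and the node names are those from the scenario above):

kubectl create deployment hello --image=nginx              # placeholder workload
kubectl expose deployment hello --type=NodePort --port=80  # Service of type NodePort
kubectl get svc hello                                       # note the assigned nodePort, e.g. 31150
# a NodePort should answer on every node, not only the one hosting the pod:
for n in worker1 worker2 worker3; do curl -m 2 -so /dev/null -w "$n: %{http_code}\n" "http://$n:31150"; done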

Env Details:
Docker : v1.20.7
Rancher: v2.7.1
RKE: v1.24.15
OS: RHEL 8.6
Cloud: VMware

Solution:

We dug deeper and made many of the changes required to solve this issue. In the end, the resolution turned out to be a bug involving the Linux VMware NIC module (vmxnet3) and the RHEL 8.x kernel.


The VMware kernel module for the network card:
[root@kubermw3 ~]# lsmod | grep vmx
vmxnet3 65536 0

By default, 'tx-checksum-ip-generic' is on, and this was causing the issue:


[root@kubermw2 ]# ethtool -k ens192 |egrep 'tx-checksum-ip-generic'
tx-checksum-ip-generic: on

We disabled it on all nodes using this command:


[root@kubermw2 ]# ethtool -K ens192 tx-checksum-ip-generic off
Actual changes:
tx-checksum-ip-generic: off
tx-tcp-segmentation: off [not requested]
tx-tcp6-segmentation: off [not requested]

And to make the above change persistent across reboots:

nmcli con modify ens192 ethtool.feature-tx-checksum-ip-generic off

After these changes, everything went back to normal.

Regards,
Aditya