kubernetes/kubernetes

kube-proxy does not work on Debian Buster

Closed this issue · 2 comments

What happened:
Installed a new node (joining an existing cluster) with Kubernetes v1.13.4 using kubeadm and containerd v1.2.4 as the container runtime. kube-proxy starts but repeatedly tries and fails to set up its iptables rules: iptables-restore rejects the generated ruleset because it cannot find the KUBE-MARK-DROP target; see the logs below.

What you expected to happen:
kube-proxy starts normally.

How to reproduce it (as minimally and precisely as possible):
Start kube-proxy on Debian 10 (Buster).
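On a stock Buster host the iptables backend can be confirmed before starting kube-proxy; iptables 1.8 prints the active backend in its version string, and the output below is what Buster's default nft-based build reports:

> iptables --version
iptables v1.8.2 (nf_tables)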

Anything else we need to know?:

kube-proxy log
> kubectl logs kube-proxy-drnk7
W0315 17:21:28.436027       1 feature_gate.go:198] Setting GA feature gate SupportIPVSProxyMode=true. It will be removed in a future release.
I0315 17:21:28.615958       1 server_others.go:189] Using ipvs Proxier.
W0315 17:21:28.616937       1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0315 17:21:28.617051       1 server_others.go:216] Tearing down inactive rules.
I0315 17:21:28.676938       1 server.go:483] Version: v1.13.4
I0315 17:21:28.686345       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0315 17:21:28.686936       1 config.go:102] Starting endpoints config controller
I0315 17:21:28.686947       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0315 17:21:28.686969       1 config.go:202] Starting service config controller
I0315 17:21:28.686975       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0315 17:21:28.787163       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0315 17:21:28.787271       1 controller_utils.go:1034] Caches are synced for service config controller
E0315 17:21:28.936603       1 proxier.go:1195] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.0: Couldn't load target `KUBE-MARK-DROP':No such file or directory

Error occurred at line: 14
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
Rules:
*nat
:KUBE-SERVICES - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-NODE-PORT - [0:0]
:KUBE-LOAD-BALANCER - [0:0]
:KUBE-MARK-MASQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-NODE-PORT -p tcp -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst ! -s 10.244.0.0/16 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
COMMIT
*filter
:KUBE-FORWARD - [0:0]
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x00004000/0x00004000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -d 10.244.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
E0315 17:21:57.908456       1 proxier.go:1195] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.0: Couldn't load target `KUBE-MARK-DROP':No such file or directory

Error occurred at line: 14
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
Rules:
*nat
:KUBE-SERVICES - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-NODE-PORT - [0:0]
:KUBE-LOAD-BALANCER - [0:0]
:KUBE-MARK-MASQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-NODE-PORT -p tcp -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst ! -s 10.244.0.0/16 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
COMMIT
*filter
:KUBE-FORWARD - [0:0]
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x00004000/0x00004000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -d 10.244.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
.. snip, the same failure repeats ..

Might be related to #71305; Debian Buster comes with iptables 1.8.2, which defaults to the new nftables backend. The failing line 14 of the restore input is the "-A KUBE-FIREWALL -j KUBE-MARK-DROP" rule, and the KUBE-MARK-DROP chain is created by kubelet through the host's iptables, so a chain created via the nft backend would be invisible to the legacy iptables-restore v1.6.0 that ships in the kube-proxy image.
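If that is the cause, the mismatch should be visible on the host (the chain present under the nft backend but not the legacy one), and the workaround reported in #71305, switching the host to the legacy backend via update-alternatives, should apply here as well. Debian-specific commands, not yet verified on this particular node:

> iptables-nft -t nat -nL KUBE-MARK-DROP      # chain should be listed under the nft backend
> iptables-legacy -t nat -nL KUBE-MARK-DROP   # expected to fail: "No chain/target/match by that name"
> update-alternatives --set iptables /usr/sbin/iptables-legacy
> update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
> reboot    # or flush the nft ruleset so stale nft rules do not keep matching traffic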

Environment:

  • Kubernetes version: v1.13.4
  • Cloud provider or hardware configuration: VMware vSphere
  • OS: Debian 10 (buster)
  • Kernel: Linux 4.19.0-2-amd64 #1 SMP Debian 4.19.16-1 (2019-01-17) x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others: Containerd v1.2.4

/sig network

@praseodym, as you pointed out, this is related to #71305. The only differences are the Kubernetes version and the container runtime: you are using containerd here, while the other issue uses Docker.

I think that this can be closed and the new findings contributed to that issue.

Closing in favour of #71305.