Support tunneling over IPv6
AkkyOrz opened this issue · 6 comments
Bug report
Hello. First of all, thank you for an amazing product.
I am trying to configure an IPv6-only Kubernetes network, which means that none of the interfaces on my node have IPv4 addresses except for the loopback interface.
In this environment, I tried to install Cilium by following these documents.
- Installation using Helm — Cilium 1.10.3 documentation
- CRD-backed by Cilium cluster-pool IPAM — Cilium 1.10.3 documentation
However, I got the following error.
...
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
I had assumed that IPv6 packets would be tunneled over IPv6.
However, from what I can see in this code, an IPv4 interface appears to be required.
Lines 453 to 457 in a8e3fa2
Does this code reject the configuration because there is no implementation that encapsulates traffic in IPv6?
Or does IPv6 encapsulation exist, and the error is caused by a bug in the implementation?
If it is the latter, I would like this conditional branch to be changed so that tunneling over IPv6 is possible; see the sketch below for what I mean.
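To make the question concrete, here is a minimal, self-contained Go sketch of the kind of address derivation I have in mind (my own illustration, not the actual code at the lines referenced above): on an IPv6-only node, a lookup restricted to IPv4 fails, while the equivalent IPv6 lookup succeeds.

// Illustrative sketch only, not Cilium's actual implementation: derive a node
// address by scanning interfaces for a global unicast address of one family.
package main

import (
	"fmt"
	"net"
)

// firstGlobalAddr returns the first global unicast address of the requested
// family ("ipv4" or "ipv6") found on any non-loopback interface.
func firstGlobalAddr(family string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if iface.Flags&net.FlagLoopback != 0 {
			continue // skip loopback, as on my nodes it is the only IPv4 interface
		}
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			ipNet, ok := addr.(*net.IPNet)
			if !ok || !ipNet.IP.IsGlobalUnicast() {
				continue
			}
			is4 := ipNet.IP.To4() != nil
			if (family == "ipv4" && is4) || (family == "ipv6" && !is4) {
				return ipNet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no global %s address found", family)
}

func main() {
	// On an IPv6-only node, only the "ipv6" lookup succeeds; insisting on an
	// IPv4 result is what produces the fatal error in my setup.
	for _, fam := range []string{"ipv4", "ipv6"} {
		ip, err := firstGlobalAddr(fam)
		fmt.Printf("%s: ip=%v err=%v\n", fam, ip, err)
	}
}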
Please let me know if there is any information you need.
Thanks.
General Information
- Cilium version (cilium version): v1.10.3
- Kernel version (uname -a): Linux <hostname> 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Orchestration system version in use (kubectl version):
  - Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
  - Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
install-cilium-with-helm.yaml (this file is used with the command: helm install cilium cilium/cilium --namespace kube-system -f install-cilium-with-helm.yaml)
ipv4:
  enabled: false
ipv6:
  enabled: true
ipam:
  # -- Configure IP Address Management mode.
  # ref: https://docs.cilium.io/en/stable/concepts/networking/ipam/
  mode: "cluster-pool"
  operator:
    # -- IPv6 CIDR range to delegate to individual nodes for IPAM.
    clusterPoolIPv6PodCIDR: "fddd::/104"
    # -- IPv6 CIDR mask size to delegate to individual nodes for IPAM.
    clusterPoolIPv6MaskSize: 120
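For completeness, I believe the same settings can also be passed directly on the command line with --set; the following should be equivalent to the values file above (assuming I have the key names right):

helm install cilium cilium/cilium --namespace kube-system \
  --set ipv4.enabled=false \
  --set ipv6.enabled=true \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv6PodCIDR="fddd::/104" \
  --set ipam.operator.clusterPoolIPv6MaskSize=120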
How to reproduce the issue
sudo kubeadm init --config=<my config>
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system -f install-cilium-with-helm.yaml
kubectl -n kube-system logs cilium-xxxxx
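To find the exact agent pod name for the logs command above, I list the Cilium pods first (assuming the chart's default k8s-app=cilium label):

kubectl -n kube-system get pods -l k8s-app=cilium -o wide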
(zsh)% kk get cn -oyaml
apiVersion: v1
items:
- apiVersion: cilium.io/v2
  kind: CiliumNode
  metadata:
    creationTimestamp: "2021-08-25T06:14:51Z"
    generation: 143
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: <node-name>
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: ""
      node-role.kubernetes.io/master: ""
      node.kubernetes.io/exclude-from-external-load-balancers: ""
    name: <node-name>
    ownerReferences:
    - apiVersion: v1
      kind: Node
      name: <node-name>
      uid: 0b1c7234-dd62-4bf7-a5bd-1ef4fe449d55
    resourceVersion: "40471"
    uid: c2df8402-b0b9-4004-ac60-e38fade548a1
  spec:
    addresses:
    - ip: 2001:200:e00:b11::1000
      type: InternalIP
    - ip: fddd::d7
      type: CiliumInternalIP
    alibaba-cloud: {}
    azure: {}
    encryption: {}
    eni: {}
    health:
      ipv6: fddd::9a
    ipam:
      podCIDRs:
      - fddd::/120  # enabling IPAM with this CIDR seems to have succeeded
  status:
    alibaba-cloud: {}
    azure: {}
    eni: {}
    ipam:
      operator-status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
(zsh)% kubectl -n kube-system logs cilium-xxxxx
...
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix="<nil>" v6Prefix="fddd::/120"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg=" Cluster-Name: default" subsys=daemon
level=info msg=" Cluster-ID: 0" subsys=daemon
level=info msg=" Local node-name: <node-name>" subsys=daemon
level=info msg=" Node-IPv6: 2001:200:e00:b11::1000" subsys=daemon
level=info msg=" IPv6 allocation prefix: fddd::/120" subsys=daemon
level=info msg=" IPv6 router address: fddd::d7" subsys=daemon
level=info msg=" Local IPv6 addresses:" subsys=daemon
level=info msg=" - 2001:200:e00:b11:250:56ff:fe9c:735c" subsys=daemon
level=info msg=" - 2001:200:e00:b11::1000" subsys=daemon
level=info msg=" - 2001:200:e00:b11::1000" subsys=daemon
level=info msg=" - fe80::2c5e:beff:fe45:8fbb" subsys=daemon
level=info msg=" External-Node IPv4: <nil>" subsys=daemon
level=info msg=" Internal-Node IPv4: <nil>" subsys=daemon
level=info msg="Creating or updating CiliumNode resource" node=<node-name> subsys=nodediscovery
level=info msg="Adding local node to cluster" node="{<node-name> default [{InternalIP 2001:200:e00:b11::1000} {CiliumInternalIP fddd::d7}] <nil> fddd::/120 <nil> fddd::e5 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:<node-name>kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 6 }" subsys=nodediscovery
level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4="<nil>" v4Prefix="<nil>" v4healthIP.IPv4="<nil>" v6CiliumHostIP.IPv6="fddd::d7" v6Prefix="fddd::/120" v6healthIP.IPv6="fddd::e5"
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
level=info msg="Setting up BPF datapath" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.all.disable_ipv6 sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv6.conf.cilium_net.forwarding sysParamValue=1
level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
level=info msg="Adding new proxy port rules for cilium-dns-egress:44169" proxy port name=cilium-dns-egress subsys=proxy
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Validating configured node address ranges" subsys=daemon
level=fatal msg="postinit failed" error="external IPv4 node address could not be derived, please configure via --ipv4-node" subsys=daemon
This is a known limitation
I'm sorry for the trouble caused by my lack of research.
I didn't know that, so this is very helpful.
Thank you for your reply!
Hello, I'm running 1.11 and still getting this error. Any update?
The fix didn't make it into v1.11. We had to prioritize other IPv6-related improvements instead. We are still planning to fix this.
Any update? Is this planned for v1.13?
eBPF-based IPv6 masquerading in tunneling mode would unlock a lot of deployments that do not have GUA IPv6 addresses available.
Adding myself to the loop to stay updated on this as well. My IPv6-only test deployment also wants an external IPv4 address and refuses to install :/