aws/amazon-vpc-cni-k8s

IPAMD logs IPv6 address as an IPv4 address in log message for DelNetworkReply

zachdorame opened this issue · 3 comments

What happened:

For a cluster in IPv6 mode, when a pod is deleted, a log line like the following is printed:

{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"rpc/rpc.pb.go:731","msg":"Send DelNetworkReply: IPv4Addr 2600:1f14:2868:6d00:f6d9::1, DeviceNumber: 0, err: <nil>"}

The address 2600:1f14:2868:6d00:f6d9::1 is labeled as IPv4Addr, but it is an IPv6 address.

Attach logs

The following logs include the ADD event and the corresponding DEL event:

{"level":"info","ts":"2024-01-04T19:43:15.193Z","caller":"rpc/rpc.pb.go:713","msg":"Send AddNetworkReply: IPv4Addr , IPv6Addr: 2600:1f14:2868:6d00:f6d9::, DeviceNumber: 0, err: <nil>"}
{"level":"info","ts":"2024-01-04T19:49:30.622Z","caller":"rpc/rpc.pb.go:713","msg":"Received AddNetwork for NS /var/run/netns/cni-14347753-9458-72eb-481b-2c6eaea963f0, Sandbox 024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291, ifname eth0"}
{"level":"debug","ts":"2024-01-04T19:49:30.622Z","caller":"rpc/rpc.pb.go:713","msg":"AddNetworkRequest: K8S_POD_NAME:\"nginx-deployment-79b55879bb-tx7r2\"  K8S_POD_NAMESPACE:\"default\"  K8S_POD_INFRA_CONTAINER_ID:\"024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291\"  ContainerID:\"024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291\"  IfName:\"eth0\"  NetworkName:\"aws-cni\"  Netns:\"/var/run/netns/cni-14347753-9458-72eb-481b-2c6eaea963f0\""}
{"level":"debug","ts":"2024-01-04T19:49:30.622Z","caller":"datastore/data_store.go:609","msg":"AssignPodIPv6Address: IPv6 address pool stats: assigned 1"}
{"level":"debug","ts":"2024-01-04T19:49:30.622Z","caller":"datastore/data_store.go:1305","msg":"Found a free IP not in DB - 2600:1f14:2868:6d00:f6d9::1"}
{"level":"debug","ts":"2024-01-04T19:49:30.622Z","caller":"datastore/data_store.go:638","msg":"Returning Free IP 2600:1f14:2868:6d00:f6d9::1"}
{"level":"debug","ts":"2024-01-04T19:49:30.622Z","caller":"datastore/data_store.go:609","msg":"New v6 IP from PD pool- 2600:1f14:2868:6d00:f6d9::1"}{"level":"info","ts":"2024-01-04T19:49:30.622Z","caller":"datastore/data_store.go:649","msg":"assignPodIPAddressUnsafe: Assign IP 2600:1f14:2868:6d00:f6d9::1 to sandbox aws-cni/024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291/eth0"}
{"level":"debug","ts":"2024-01-04T19:49:30.623Z","caller":"rpc/rpc.pb.go:713","msg":"VPC V6 CIDR 2600:1f14:2868:6d00::/56"}
{"level":"info","ts":"2024-01-04T19:49:30.623Z","caller":"rpc/rpc.pb.go:713","msg":"Send AddNetworkReply: IPv4Addr , IPv6Addr: 2600:1f14:2868:6d00:f6d9::1, DeviceNumber: 0, err: <nil>"}
{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"rpc/rpc.pb.go:731","msg":"Received DelNetwork for Sandbox 024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291"}
{"level":"debug","ts":"2024-01-04T19:49:41.425Z","caller":"rpc/rpc.pb.go:731","msg":"DelNetworkRequest: K8S_POD_NAME:\"nginx-deployment-79b55879bb-tx7r2\"  K8S_POD_NAMESPACE:\"default\"  K8S_POD_INFRA_CONTAINER_ID:\"024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291\"  Reason:\"PodDeleted\"  ContainerID:\"024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291\"  IfName:\"eth0\"  NetworkName:\"aws-cni\""}
{"level":"debug","ts":"2024-01-04T19:49:41.425Z","caller":"ipamd/rpc_handler.go:260","msg":"UnassignPodIPAddress: IP address pool stats: total 281474976710656, assigned 2, sandbox aws-cni/024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291/eth0"}
{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"datastore/data_store.go:1111","msg":"unassignPodIPAddressUnsafe: Unassign IP 2600:1f14:2868:6d00:f6d9::1 from sandbox aws-cni/024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291/eth0"}
{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"ipamd/rpc_handler.go:260","msg":"UnassignPodIPAddress: sandbox aws-cni/024964b7fe579b000c3ed600b3955c82b89c48df7389827f40d7ca271b2aa291/eth0's ipAddr 2600:1f14:2868:6d00:f6d9::1, DeviceNumber 0"}
{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"rpc/rpc.pb.go:731","msg":"Send DelNetworkReply: IPv4Addr 2600:1f14:2868:6d00:f6d9::1, DeviceNumber: 0, err: <nil>"}

What you expected to happen:

The log line should be changed to match the format of the ADD event log line. I suggest something like:

{"level":"info","ts":"2024-01-04T19:49:41.425Z","caller":"rpc/rpc.pb.go:731","msg":"Send DelNetworkReply: IPv4Addr , IPv6Addr: 2600:1f14:2868:6d00:f6d9::1, DeviceNumber: 0, err: <nil>"}
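
To make the suggestion concrete, here is a minimal, hypothetical sketch of the formatting logic. This is not the actual ipamd code; the helper name and parameters are made up for illustration, and the real fix would live wherever the DelNetworkReply log statement is today:

package main

import (
	"fmt"
	"net"
)

// formatDelNetworkReply is a hypothetical helper illustrating the suggestion:
// mirror the AddNetworkReply line by printing the address under the label
// matching its family instead of always under IPv4Addr.
func formatDelNetworkReply(addr string, deviceNumber int, err error) string {
	ip := net.ParseIP(addr)
	if ip != nil && ip.To4() == nil {
		// ParseIP succeeded but the address has no IPv4 form, so it is IPv6.
		return fmt.Sprintf("Send DelNetworkReply: IPv4Addr , IPv6Addr: %s, DeviceNumber: %d, err: %v", addr, deviceNumber, err)
	}
	return fmt.Sprintf("Send DelNetworkReply: IPv4Addr %s, DeviceNumber: %d, err: %v", addr, deviceNumber, err)
}

func main() {
	// Prints the suggested DEL line for the IPv6 address from the logs above.
	fmt.Println(formatDelNetworkReply("2600:1f14:2868:6d00:f6d9::1", 0, nil))
}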

How to reproduce it (as minimally and precisely as possible):

Use the following eksctl config:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ipv6-vpc-cni-log-test
  region: us-west-2
  version: "1.27"
kubernetesNetworkConfig:
  ipFamily: IPv6
addons:
  - name: vpc-cni
    version: latest
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
iam:
  withOIDC: true
managedNodeGroups:
  - name: ipv6-nodegroup
    instanceType: c5.xlarge
    ssh:
      allow: true
      enableSsm: true

1. Save the config above as ipv6-cluster.yaml.
2. Run eksctl create cluster -f ipv6-cluster.yaml.
3. Start and stop a pod. In this instance I did this by applying and then deleting an nginx deployment that spins up a single pod (one way to do this is shown after this list).
4. Observe ipamd.log for the DelNetworkReply log message.
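
For step 3, any pod churn works. One way, assuming kubectl is configured against the new cluster (the deployment name is arbitrary):

kubectl create deployment nginx-deployment --image=nginx --replicas=1
kubectl wait --for=condition=Available deployment/nginx-deployment
kubectl delete deployment nginx-deployment

For step 4, ipamd.log can be found on the node at /var/log/aws-routed-eni/ipamd.log.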

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"darwin/arm64"}
  • CNI Version:
v1.16.0
  • OS (e.g: cat /etc/os-release):
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"
  • Kernel (e.g. uname -a):
Linux ip-192-168-27-39.us-west-2.compute.internal 5.10.201-191.748.amzn2.x86_64 #1 SMP Mon Nov 27 18:28:14 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

yup, I'll have a fix out shortly

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue, feel free to do so.