Chapter 11.2.1: Question about NodePort when sending a request to one of the nodes
Environment
- Cluster version: v1.21.4
- Virtual Machine: Oracle VM
- CNI: Calico
Problem
I get different results from those shown in the book.
From the book:

> Did you also notice where the pod thought the request came from? Look at the Client IP at the end of the response. That’s not the IP of the computer from which I sent the request. You may have noticed that it’s the IP of the node I sent the request to. I explain why this is and how you can prevent it in section 11.2.3.
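For reference, the usual mechanism for preserving the client IP (and presumably what section 11.2.3 covers) is the service's `spec.externalTrafficPolicy` field. A minimal sketch of switching the kiada service to `Local` so pods see the real client address; the trade-off is that each node then only forwards to pods scheduled on that same node:

```shell
# Disable the SNAT that kube-proxy applies when it forwards a NodePort
# connection to a pod on another node.
kubectl -n john patch service kiada \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'

# Confirm the field was set
kubectl -n john get service kiada \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```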
[Node info]
```
root@k8s-m:~# kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-m    Ready    control-plane,master   86d   v1.21.4   192.168.100.10    <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w1   Ready    <none>                 86d   v1.21.4   192.168.100.101   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w2   Ready    <none>                 86d   v1.21.4   192.168.100.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w3   Ready    <none>                 86d   v1.21.4   192.168.100.103   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
```
[Request to nodes result]
```
# Requesting the k8s-w1 node
$ curl 192.168.100.101:30080
==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-001" on node "k8s-w1".
Pod hostname: kiada-001; Pod IP: 172.16.228.71; Node IP: 192.168.100.101; Client IP: ::ffff:10.0.2.15

==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-003" on node "k8s-w2".
Pod hostname: kiada-003; Pod IP: 172.16.46.4; Node IP: 192.168.100.102; Client IP: ::ffff:172.16.228.64

==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-canary" on node "k8s-w3".
Pod hostname: kiada-canary; Pod IP: 172.16.197.4; Node IP: 192.168.100.103; Client IP: ::ffff:172.16.228.64
```
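Since a NodePort request to one node can be forwarded to a pod on any node, sampling the endpoint repeatedly and keeping only the routing line of each response makes the pattern easier to see; a small sketch:

```shell
# Send ten requests to k8s-w1's NodePort and print which pod/node answered
for i in $(seq 1 10); do
  curl -s 192.168.100.101:30080 | grep 'on node'
done
```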
The Client IP is not the node IP (192.168.100.10*). I have no idea where these IPs (10.0.2.15 and 172.16.228.64) come from. Each node's pods report a fixed value, but I don't know its origin:
"k8s-w1" ↔ 10.0.2.15
"k8s-w2" ↔ 172.16.228.64
"k8s-w3" ↔ 172.16.228.64
[All resource information]
```
root@k8s-m:~# kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/nginx 1/1 Running 0 21d 172.16.228.70 k8s-w1 <none> <none>
john pod/kiada-001 2/2 Running 0 8h 172.16.228.71 k8s-w1 <none> <none>
john pod/kiada-002 2/2 Running 0 8h 172.16.197.3 k8s-w3 <none> <none>
john pod/kiada-003 2/2 Running 0 8h 172.16.46.4 k8s-w2 <none> <none>
john pod/kiada-canary 2/2 Running 0 8h 172.16.197.4 k8s-w3 <none> <none>
john pod/nginx 1/1 Running 0 22d 172.16.228.68 k8s-w1 <none> <none>
john pod/quiz 2/2 Running 0 23d 172.16.46.3 k8s-w2 <none> <none>
john pod/quote-001 2/2 Running 0 23d 172.16.46.1 k8s-w2 <none> <none>
john pod/quote-002 2/2 Running 0 23d 172.16.228.65 k8s-w1 <none> <none>
john pod/quote-003 2/2 Running 0 23d 172.16.197.2 k8s-w3 <none> <none>
john pod/quote-canary 2/2 Running 0 23d 172.16.197.1 k8s-w3 <none> <none>
kube-system pod/calico-kube-controllers-6fd7b9848d-k7v4w 1/1 Running 0 86d 172.16.29.3 k8s-m <none> <none>
kube-system pod/calico-node-nz65k 1/1 Running 0 86d 192.168.100.102 k8s-w2 <none> <none>
kube-system pod/calico-node-pd9pt 1/1 Running 0 86d 192.168.100.10 k8s-m <none> <none>
kube-system pod/calico-node-w9rf5 1/1 Running 0 86d 192.168.100.103 k8s-w3 <none> <none>
kube-system pod/calico-node-z82zf 1/1 Running 0 86d 192.168.100.101 k8s-w1 <none> <none>
kube-system pod/coredns-558bd4d5db-d78qm 1/1 Running 0 86d 172.16.29.1 k8s-m <none> <none>
kube-system pod/coredns-558bd4d5db-kzpkt 1/1 Running 0 86d 172.16.29.2 k8s-m <none> <none>
kube-system pod/etcd-k8s-m 1/1 Running 0 86d 192.168.100.10 k8s-m <none> <none>
kube-system pod/kube-apiserver-k8s-m 1/1 Running 0 86d 192.168.100.10 k8s-m <none> <none>
kube-system pod/kube-controller-manager-k8s-m 1/1 Running 7 86d 192.168.100.10 k8s-m <none> <none>
kube-system pod/kube-proxy-h2c5v 1/1 Running 0 86d 192.168.100.103 k8s-w3 <none> <none>
kube-system pod/kube-proxy-kt7kv 1/1 Running 0 86d 192.168.100.102 k8s-w2 <none> <none>
kube-system pod/kube-proxy-qgpjp 1/1 Running 0 86d 192.168.100.101 k8s-w1 <none> <none>
kube-system pod/kube-proxy-znxn4 1/1 Running 0 86d 192.168.100.10 k8s-m <none> <none>
kube-system pod/kube-scheduler-k8s-m 1/1 Running 6 86d 192.168.100.10 k8s-m <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 86d <none>
default service/my-service ClusterIP 10.98.28.169 <none> 80/TCP 22d app=MyApp
john service/kiada NodePort 10.99.142.250 <none> 80:30080/TCP,443:30443/TCP 8h app=kiada
john service/quiz ClusterIP 10.104.206.158 <none> 80/TCP 23d app=quiz
john service/quote ClusterIP 10.97.190.49 <none> 80/TCP 23d app=quote
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 86d k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/calico-node 4 4 4 4 4 kubernetes.io/os=linux 86d calico-node docker.io/calico/node:v3.22.1 k8s-app=calico-node
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 kubernetes.io/os=linux 86d kube-proxy k8s.gcr.io/kube-proxy:v1.21.10 k8s-app=kube-proxy
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 86d calico-kube-controllers docker.io/calico/kube-controllers:v3.22.1 k8s-app=calico-kube-controllers
kube-system deployment.apps/coredns 2/2 2 2 86d coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/calico-kube-controllers-6fd7b9848d 1 1 1 86d calico-kube-controllers docker.io/calico/kube-controllers:v3.22.1 k8s-app=calico-kube-controllers,pod-template-hash=6fd7b9848d
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 86d coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns,pod-template-hash=558bd4d5db
```
This was caused by the 2nd edition. I posted this issue in the wrong place.