1st Ingress example
malrawi opened this issue · 7 comments
Hi,
Thanks a lot for this really great book.
I am facing an issue with the first Ingress example in Chapter 5, and I would really appreciate any pointers. Please note that I am using minikube.
First, let me check something I may have missed: the book does not explicitly state this, but I think I needed to create the kubia-nodeport NodePort service (from kubia-svc-nodeport.yaml) first. Is that correct?
I created the NodePort service, and then created the Ingress using the provided yaml. Now if I run kubectl describe ingress kubia, I notice that under the Backends column I have kubia-nodeport:80 (), which I don't think is good. I can also see that the IP 10.0.2.15 was assigned to the Ingress.
Now, if I run kubectl describe svc kubia-nodeport, I can see that an IP is assigned to the service (10.102.123.188), and there are 3 IP addresses listed under Endpoints (172.17.0.5:8080, 172.17.0.6:8080, 172.17.0.7:8080), which are the pods' IP addresses.
The problem is that curling kubia.example.com (10.0.2.15) does not work. Please note that curling (and browsing) http://192.168.99.100:30123/ works; 192.168.99.100 is minikube's IP address.
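For reference, this is roughly the Ingress I created (reconstructed from the fields above; the exact apiVersion and layout in the book's yaml may differ slightly):

```sh
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com          # hostname the rule matches on
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport   # the NodePort service created above
          servicePort: 80
EOF
```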
You don't need to use a NodePort service. A regular (ClusterIP) service will do just fine. In the book, the listing does show me using a NodePort service, because that's a requirement of GKE, not Kubernetes itself.
Curling http://192.168.99.100:30123 hits the NodePort service directly - it doesn't go through the Ingress.
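To illustrate the first point, a plain ClusterIP service like this would work just as well (just a sketch; if you use it, the serviceName in the Ingress backend has to match this service's name, and I'm assuming the pods are labeled app=kubia):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kubia            # the Ingress backend's serviceName must match this
spec:
  selector:
    app: kubia           # assumes the kubia pods carry this label
  ports:
  - port: 80             # port the Ingress backend points at
    targetPort: 8080     # port the kubia pods listen on
EOF
```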
To get curl http://kubia.example.com to work, you need to edit your /etc/hosts file and point kubia.example.com to 192.168.99.100.
Instead of editing /etc/hosts, you could also use kubia.192.168.99.100.xip.io instead of kubia.example.com, but you'd need to modify your Ingress object (set the host property to kubia.192.168.99.100.xip.io).
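For example (assuming 192.168.99.100 is what minikube ip reports for you):

```sh
# Point kubia.example.com at the minikube VM
echo "192.168.99.100 kubia.example.com" | sudo tee -a /etc/hosts

# This request now resolves to the VM and goes through the ingress
curl http://kubia.example.com
```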
Let me know how it goes.
Thanks for the prompt reply!
If I make kubia.example.com point to 192.168.99.100, isn't that the same as curling 192.168.99.100, i.e., hitting the node port and not going through the Ingress?
Shouldn't I be able to use the Ingress's IP address (10.0.2.15)? Or is that a deficiency of using minikube?
Oh, sorry. I was assuming that the ingress controller is exposed on port 80 of the minikube VM.
If it is, here's what happens:
- curl http://kubia.example.com will hit the ingress (assuming you have an /etc/hosts entry) and it would then proxy the request to one of the pods
- curl http://192.168.99.100 will hit the ingress, but will NOT proxy the request to the pods, because the request won't contain the correct Host header and thus won't match any ingress rules (see the curl example below)
- curl http://192.168.99.100:30000 (assuming 30000 is the node port of the service) will NOT hit the ingress; it will go straight to the service (which then forwards the connection to one of the pods)
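By the way, you can check the Host-header matching without touching /etc/hosts at all, by setting the header explicitly (just a quick test, assuming the controller really is listening on port 80 of the VM):

```sh
# No matching Host header -> falls through to the default backend
curl http://192.168.99.100

# Explicit Host header -> matches the kubia.example.com rule and is proxied to a pod
curl -H "Host: kubia.example.com" http://192.168.99.100
```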
Where did you get the 10.0.2.15 IP address? That looks like an internal IP, which you most likely can't access outside of the minikube VM.
I see that IP address when I run both commands kubectl get ing kubia and kubectl describe ing kubia. It happens to be the IP address of eth0 on minikube's VM.
I updated the hosts entry as you suggested earlier (192.168.99.100 kubia.example.com). Here is what is happening now:
- curling the IP address produces default backend - 404, because the request is not proxied to a pod.
- curling the host produces You've hit kubia-hjhzq, because the request is proxied to a pod.
Correct?
Is there a flag, or some other means I can use, to see the Ingress controller being hit and to know its decision on whether it should proxy the request or not?
Yes, curling the IP address won't match any rules and will hit the default backend.
The ingress controller runs as a pod (most likely in the kube-system namespace), so you should be able to get its logs with kubectl.
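Something along these lines should work (assuming minikube's ingress addon, which runs an nginx-based controller in kube-system; the pod name below is a placeholder):

```sh
# Find the ingress controller pod
kubectl get pods -n kube-system

# Follow its logs; incoming requests show up there, so you can see how they are handled
kubectl logs -f <nginx-ingress-controller-pod> -n kube-system
```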
Thanks a lot for the prompt help and the great writing!
Oh, about the 10.0.2.15 IP address. The Minikube VM (when run through VirtualBox) gets two network interfaces. One has the IP 192.168.99.100 and is used for inbound traffic, the other has IP 10.0.2.15 and is used for outbound traffic. The ingress controller erroneously shows the outbound IP in the ingress object.
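If you want to confirm that, you can look at the interfaces from inside the VM (interface names may vary):

```sh
# Open a shell inside the minikube VM
minikube ssh

# Inside the VM: list the network interfaces
ip addr show
# eth0 -> 10.0.2.15      (NAT adapter, outbound traffic)
# eth1 -> 192.168.99.100 (host-only adapter, inbound traffic)
```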