NLK is not working as expected
Closed this issue · 4 comments
Describe the bug
NLK adds all worker node machines as upstreams on the NGINX Plus servers.
I deployed the NGINX Ingress controller as a NodePort Service.
I have three worker machines, but only one pod for the NGINX Ingress controller. This means that two worker nodes do not have any NGINX Ingress controller pods.
When traffic arrives at a node that does not have an NGINX Ingress controller pod, it cannot reach the ingress controller pods.
The NGINX Plus dashboard displays those upstreams as unhealthy.
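For reference, the ingress controller is exposed roughly like this (a minimal sketch; the name, namespace, labels, and ports are assumptions, not my exact manifest):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress        # assumed Service name
  namespace: nginx-ingress   # assumed namespace
spec:
  type: NodePort
  selector:
    app: nginx-ingress       # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443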

I think #160 will solve the problem.
Is there anything specific I need to pay attention to?
For example, do the private IP addresses of the worker nodes have to fall within the Kubernetes CIDR block?
Expected behavior
NLK should add as upstreams only the nodes that are running an NGINX Ingress controller pod.
Your environment
Deployed using the Helm chart with the following image values:
image:
  registry: ghcr.io
  repository: nginxinc/nginx-loadbalancer-kubernetes
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: latest
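For completeness, these values were applied when installing the chart, roughly like this (the release name, namespace, and chart reference are placeholders, not the exact command I ran):

helm upgrade --install nlk <nlk-chart> \
  --namespace nlk --create-namespace \
  -f values.yaml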
Can you please elaborate?
How are you deploying your ingress controller service? Or exposing your ingress deployment?
Interesting, we had not tested the case where a Node has no ingress controller.
Agreed that #160 should resolve this problem. Will see about getting this in my rotation as soon as possible.
Hi @brianehlert, thanks for your response.
I already provided the necessary details above. Do you need any further information?
"I deployed the NGINX Ingress controller as a NodePort Service."
I solved my problem by changing externalTrafficPolicy from Local to Cluster. I think we should add this to the documentation: if externalTrafficPolicy is not changed to Cluster, NLK will not work as expected.
I deployed NGINX Ingress through Helm, where Local is the default configuration.
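For anyone who hits the same issue, this is the kind of change that fixed it for me; the Service name and namespace below are assumptions, so adjust them to your deployment:

# Switch the existing ingress controller Service to externalTrafficPolicy: Cluster
kubectl patch svc nginx-ingress -n nginx-ingress \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

Alternatively, the equivalent value can be set in the ingress controller Helm chart (the exact key may differ between chart versions):

controller:
  service:
    externalTrafficPolicy: Cluster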