Issue with a second ingress-nginx controller deployed on an RKE2 Kubernetes cluster running on bare metal.
Ganesh-hub-5 opened this issue · 16 comments
What happened:
We have deployed two NGINX ingress controllers in our cluster. The first ingress controller works fine, but we are facing an issue with the second one. When accessing a service through the second ingress controller (http://x3.abc.com/sample), we get a "This site can't be reached — x3.abc.com took too long to respond" error. We followed the same steps as for the first ingress controller, including assigning a different IP address via MetalLB and a different ingress class in a different namespace.
What you expected to happen:
For testing purposes we deployed a simple nginx image using standard YAML files. Everything is up and running: there are no error logs in the pods, the Service maps to the correct endpoints, and the Ingress resource got an external IP assigned by the second NGINX ingress controller. But accessing http://x3.abc.com/sample still gives the "site can't be reached" error.
NGINX Ingress controller version:
```console
NGINX Ingress controller
  Release:    v1.10.1-hardened1
  Build:      git-b48c66a2f
  Repository: https://github.com/rancher/ingress-nginx
  nginx version: nginx/1.25.3
```
Kubernetes version:
```console
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.11+rke2r1
```
Environment:
- Cloud provider or hardware configuration: bare metal with MetalLB
- OS: Red Hat Enterprise Linux
- Kernel (e.g. `uname -a`): Linux
- RKE2 version: v1.28.11
- MetalLB version: v0.14.5
How was the ingress-nginx controller installed:

Below are the user-supplied values for the second NGINX ingress controller:

```console
$ helm -n abcx3apps get values rke2-ingress-nginx-abcx3
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    enabled: true
    port: 8084
    service:
      port: 8084
  allowSnippetAnnotations: true
  config:
    enable-real-ip: true
    use-forwarded-headers: true
  containerPort:
    http: 8082
    https: 8083
  hostPort:
    enabled: true
    http: 8082
    https: 8083
  ingressClass: nginx-abcx3
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-abcx3
    enabled: true
    name: nginx-abcx3
  publishService:
    enabled: true
  service:
    annotations:
      metallb.universe.tf/loadBalancerIPs: 10.11.XXX.74
    enabled: true
    external:
      enabled: true
    externalTrafficPolicy: Local
    type: LoadBalancer
  watchIngressWithoutClass: false
global:
  clusterCIDR: 10.42.0.0/XX
  clusterCIDRv4: 10.42.0.0/XX
  clusterDNS: 10.43.0.XX
  clusterDomain: cluster.local
  rke2DataDir: /var/lib/rancher/rke2
  serviceCIDR: 10.43.0.0/XX
```
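For comparison, the multiple-controllers FAQ only requires the second instance to differ in ingress class, controller class, and election ID. Since MetalLB already gives this controller its own VIP, binding hostPorts 8082/8083 on every node is not strictly needed. A sketch of values without the hostPort mapping (not a verified configuration; it assumes the same chart keys shown above) might look like:

```yaml
# Sketch only: same chart keys as in the values above are assumed.
# With a dedicated MetalLB VIP, the LoadBalancer Service can reach the
# controller pods over the CNI, so no hostPort binding is required.
controller:
  ingressClass: nginx-abcx3
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-abcx3
    enabled: true
    name: nginx-abcx3
  hostPort:
    enabled: false   # avoid binding 8082/8083 on every node
  publishService:
    enabled: true
  service:
    annotations:
      metallb.universe.tf/loadBalancerIPs: 10.11.XXX.74
    enabled: true
    externalTrafficPolicy: Local
    type: LoadBalancer
```

This keeps the two instances unique while removing the node-level port bindings that the host firewall would otherwise have to permit.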
Below are the user-supplied values for the first NGINX ingress controller:

```yaml
USER-SUPPLIED VALUES:
controller:
  allowSnippetAnnotations: true
  config:
    enable-real-ip: true
    use-forwarded-headers: true
  publishService:
    enabled: true
  service:
    enabled: true
    external:
      enabled: true
    externalTrafficPolicy: Local
    type: LoadBalancer
global:
  clusterCIDR: 10.42.0.0/XX
  clusterCIDRv4: 10.42.0.0/XX
  clusterDNS: 10.43.0.XX
  clusterDomain: cluster.local
  rke2DataDir: /var/lib/rancher/rke2
  serviceCIDR: 10.43.0.0/XX
```
Below are the second NGINX ingress controller details:

```console
$ kubectl get all -n abcx3apps
NAME                                         READY   STATUS    RESTARTS   AGE
pod/rke2-ingress-nginx-abcx3-controller-7njfp   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-868cd   1/1   Running   0          55m
pod/rke2-ingress-nginx-abcx3-controller-8px2f   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-9p6f5   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-c2652   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-mmmkx   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-q7qbk   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-w78qw   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-xclcj   1/1   Running   0          21h
pod/rke2-ingress-nginx-abcx3-controller-ztjh5   1/1   Running   0          21h

NAME                                                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                     AGE
service/rke2-ingress-nginx-abcx3-controller             LoadBalancer   10.43.XX.1     10.11.XXX.74   80:30263/TCP,443:31106/TCP,5432:31407/TCP   38d
service/rke2-ingress-nginx-abcx3-controller-admission   ClusterIP      10.43.67.XXX   <none>         443/TCP                                     38d

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/rke2-ingress-nginx-abcx3-controller   10        10        10      10           10          kubernetes.io/os=linux   38d
```
Below are the first NGINX ingress controller details:

```console
$ kubectl get all -n kube-system
NAME                                      READY   STATUS    RESTARTS      AGE
pod/rke2-ingress-nginx-controller-2fhtg   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-6898n   1/1     Running   3 (17d ago)   32d
pod/rke2-ingress-nginx-controller-8ct96   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-bc475   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-htk7f   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-kjv7f   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-lkrq9   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-mqxt9   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-xkq9f   1/1     Running   0             42d
pod/rke2-ingress-nginx-controller-zm9zh   1/1     Running   0             42d

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
service/rke2-ingress-nginx-controller             LoadBalancer   10.43.89.XX     10.11.XXX.71   80:30264/TCP,443:32070/TCP   148d
service/rke2-ingress-nginx-controller-admission   ClusterIP      10.43.131.XXX   <none>         443/TCP                      42d

NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/rke2-ingress-nginx-controller   10        10        10      10           10          kubernetes.io/os=linux   42d
```
Current state of the second NGINX ingress controller's IngressClass:

```console
$ kubectl describe ingressclasses
Name:         nginx-abcx3
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=rke2-ingress-nginx-abcx3
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=rke2-ingress-nginx
              app.kubernetes.io/part-of=rke2-ingress-nginx
              app.kubernetes.io/version=1.10.1
              helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:  meta.helm.sh/release-name: rke2-ingress-nginx-abcx3
              meta.helm.sh/release-namespace: abcx3apps
Controller:   k8s.io/ingress-nginx-abcx3
Events:       <none>
```
Current state of the first NGINX ingress controller's IngressClass:

```console
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=rke2-ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=rke2-ingress-nginx
              app.kubernetes.io/part-of=rke2-ingress-nginx
              app.kubernetes.io/version=1.10.1
              helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:  meta.helm.sh/release-name: rke2-ingress-nginx
              meta.helm.sh/release-namespace: kube-system
Controller:   k8s.io/ingress-nginx
Events:       <none>
```
Below is the description of the first ingress controller's Service (`kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`):

```console
Name:                     rke2-ingress-nginx-controller
Namespace:                kube-system
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=rke2-ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=rke2-ingress-nginx
                          app.kubernetes.io/part-of=rke2-ingress-nginx
                          app.kubernetes.io/version=1.10.1
                          helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:              meta.helm.sh/release-name: rke2-ingress-nginx
                          meta.helm.sh/release-namespace: kube-system
                          metallb.universe.tf/ip-allocated-from-pool: mlops-pool
                          metallb.universe.tf/loadBalancerIPs: 10.11.XXX.71
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=rke2-ingress-nginx,app.kubernetes.io/name=rke2-ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.89.XX
IPs:                      10.43.89.XX
LoadBalancer Ingress:     10.11.XXX.71
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30264/TCP
Endpoints:                10.42.0.XXX:80,10.42.1.XX:80,10.42.10.XX:80 + 7 more...
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32070/TCP
Endpoints:                10.42.0.XXX:443,10.42.1.XX:443,10.42.10.XX:443 + 7 more...
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31324
Events:                   <none>
```
Below is the description of the second ingress controller's Service (`kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`):

```console
Name:                     rke2-ingress-nginx-abcx3-controller
Namespace:                abcx3apps
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=rke2-ingress-nginx-abcx3
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=rke2-ingress-nginx
                          app.kubernetes.io/part-of=rke2-ingress-nginx
                          app.kubernetes.io/version=1.10.1
                          helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:              meta.helm.sh/release-name: rke2-ingress-nginx-abcx3
                          meta.helm.sh/release-namespace: abcx3apps
                          metallb.universe.tf/ip-allocated-from-pool: a3-pool
                          metallb.universe.tf/loadBalancerIPs: 10.11.XXX.74
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=rke2-ingress-nginx-abcx3,app.kubernetes.io/name=rke2-ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.XX.1
IPs:                      10.43.XX.1
LoadBalancer Ingress:     10.11.XXX.74
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30263/TCP
Endpoints:                10.42.0.XXX:8082,10.42.1.XX:8082,10.42.10.XX:8082 + 7 more...
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31106/TCP
Endpoints:                10.42.0.XXX:8083,10.42.1.XX:8083,10.42.10.XX:8083 + 7 more...
Port:                     pgsql  5432/TCP
TargetPort:               5432/TCP
NodePort:                 pgsql  31407/TCP
Endpoints:                10.42.0.XXX:5432,10.42.1.XX:5432,10.42.10.XX:5432 + 7 more...
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31086
Events:                   <none>
```
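Note how the ports chain together in the second controller's describe output: the Service listens on 80/443, but `TargetPort: http/TCP` is a named port that resolves to the controller pod's containerPort 8082/8083, which hostPort also binds on every node. Reassembled as a fragment (values taken from this report; not an authoritative manifest):

```yaml
# Port resolution for the second controller's Service, per the output above.
ports:
- name: http
  port: 80          # MetalLB VIP / ClusterIP port
  targetPort: http  # named port -> containerPort 8082 (also hostPort 8082)
- name: https
  port: 443
  targetPort: https # named port -> containerPort 8083 (also hostPort 8083)
```

With `externalTrafficPolicy: Local`, traffic to the VIP is only delivered on nodes that have a ready local endpoint, so anything on the nodes filtering 8082/8083 (or the health-check NodePort 31086) would produce exactly the connection timeout reported here.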
The logs of the first ingress controller's pods show no errors.

The logs of the second ingress controller's pods also show no errors:

```console
I1122 08:46:12.363072       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"abcx3", Name:"sample-app-ingress", UID:"6c5faa0c-8b28-498a-9654-6c11ea072c1a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"90167044", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1122 08:46:12.454066       7 nginx.go:313] "Starting NGINX process"
I1122 08:46:12.454375       7 leaderelection.go:250] attempting to acquire leader lease abcx3apps/rke2-ingress-nginx-abcx3-leader...
I1122 08:46:12.454673       7 nginx.go:333] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1122 08:46:12.455102       7 controller.go:193] "Configuration changes detected, backend reload required"
I1122 08:46:12.459153       7 status.go:85] "New leader elected" identity="rke2-ingress-nginx-abcx3-controller-c2652"
I1122 08:46:12.503203       7 controller.go:213] "Backend successfully reloaded"
I1122 08:46:12.503299       7 controller.go:224] "Initial sync, sleeping for 1 second"
I1122 08:46:12.503384       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"abcx3apps", Name:"rke2-ingress-nginx-abcx3-controller-bzk25", UID:"8be9b932-943c-40fb-b1d2-7868d77bf646", APIVersion:"v1", ResourceVersion:"90213878", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
```
Describing the DaemonSet of the first ingress controller:

```console
Name:           rke2-ingress-nginx-controller
Selector:       app.kubernetes.io/component=controller,app.kubernetes.io/instance=rke2-ingress-nginx,app.kubernetes.io/name=rke2-ingress-nginx
Node-Selector:  kubernetes.io/os=linux
Labels:         app.kubernetes.io/component=controller
                app.kubernetes.io/instance=rke2-ingress-nginx
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=rke2-ingress-nginx
                app.kubernetes.io/part-of=rke2-ingress-nginx
                app.kubernetes.io/version=1.10.1
                helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: rke2-ingress-nginx
                meta.helm.sh/release-namespace: kube-system
Desired Number of Nodes Scheduled: 10
Current Number of Nodes Scheduled: 10
Number of Nodes Scheduled with Up-to-date Pods: 10
Number of Nodes Scheduled with Available Pods: 10
Number of Nodes Misscheduled: 0
Pods Status:  10 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=controller
                    app.kubernetes.io/instance=rke2-ingress-nginx
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=rke2-ingress-nginx
                    app.kubernetes.io/part-of=rke2-ingress-nginx
                    app.kubernetes.io/version=1.10.1
                    helm.sh/chart=rke2-ingress-nginx-4.10.101
  Service Account:  rke2-ingress-nginx
  Containers:
   rke2-ingress-nginx-controller:
    Image:           rancher/nginx-ingress-controller:v1.10.1-hardened1
    Ports:           80/TCP, 443/TCP, 8443/TCP
    Host Ports:      80/TCP, 443/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
      --election-id=rke2-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --watch-ingress-without-class=true
      --enable-metrics=false
    Requests:
      cpu:     100m
      memory:  90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       (v1:metadata.name)
      POD_NAMESPACE:  (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
  Volumes:
   webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rke2-ingress-nginx-admission
    Optional:    false
  Node-Selectors:  kubernetes.io/os=linux
  Tolerations:     <none>
Events:            <none>
```
Describing the DaemonSet of the second ingress controller:

```console
Name:           rke2-ingress-nginx-abcx3-controller
Selector:       app.kubernetes.io/component=controller,app.kubernetes.io/instance=rke2-ingress-nginx-abcx3,app.kubernetes.io/name=rke2-ingress-nginx
Node-Selector:  kubernetes.io/os=linux
Labels:         app.kubernetes.io/component=controller
                app.kubernetes.io/instance=rke2-ingress-nginx-abcx3
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=rke2-ingress-nginx
                app.kubernetes.io/part-of=rke2-ingress-nginx
                app.kubernetes.io/version=1.10.1
                helm.sh/chart=rke2-ingress-nginx-4.10.101
Annotations:    deprecated.daemonset.template.generation: 10
                meta.helm.sh/release-name: rke2-ingress-nginx-abcx3
                meta.helm.sh/release-namespace: abcx3apps
Desired Number of Nodes Scheduled: 10
Current Number of Nodes Scheduled: 10
Number of Nodes Scheduled with Up-to-date Pods: 10
Number of Nodes Scheduled with Available Pods: 10
Number of Nodes Misscheduled: 0
Pods Status:  10 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=controller
                    app.kubernetes.io/instance=rke2-ingress-nginx-abcx3
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=rke2-ingress-nginx
                    app.kubernetes.io/part-of=rke2-ingress-nginx
                    app.kubernetes.io/version=1.10.1
                    helm.sh/chart=rke2-ingress-nginx-4.10.101
  Service Account:  rke2-ingress-nginx-abcx3
  Containers:
   rke2-ingress-nginx-controller:
    Image:           rancher/nginx-ingress-controller:v1.10.1-hardened1
    Ports:           8082/TCP, 8083/TCP, 8443/TCP
    Host Ports:      8082/TCP, 8083/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/rke2-ingress-nginx-abcx3-controller
      --election-id=rke2-ingress-nginx-abcx3-leader
      --controller-class=k8s.io/ingress-nginx-abcx3
      --ingress-class=nginx-abcx3
      --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-abcx3-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --ingress-class-by-name=true
      --enable-metrics=false
      --tcp-services-configmap=$(POD_NAMESPACE)/abc-services
    Requests:
      cpu:     100m
      memory:  90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       (v1:metadata.name)
      POD_NAMESPACE:  (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
  Volumes:
   webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rke2-ingress-nginx-abcx3-admission
    Optional:    false
  Node-Selectors:  kubernetes.io/os=linux
  Tolerations:     <none>
Events:            <none>
```
To reproduce the issue, we first use the second ingress controller, whose ingress class is `nginx-abcx3`. The same Deployment and Service YAML are used in both scenarios.
**Deployment YAML**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: abcx3ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: nginx:latest
        ports:
        - containerPort: 80
```
**Service YAML**

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app-service
  namespace: abcx3ns
spec:
  selector:
    app: sample-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
**Ingress YAML**

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app-ingress
  namespace: abcx3ns
  annotations:
    kubernetes.io/ingress.class: "nginx-abcx3"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx-abcx3
  rules:
  - host: x3.abc.com
    http:
      paths:
      - path: /sample
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
```
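A side note on this manifest: the `kubernetes.io/ingress.class` annotation is deprecated and redundant once `spec.ingressClassName` is set, and having both invites class-matching confusion when two controllers are in play. A minimal equivalent sketch (my simplification, dropping the redirect and regex annotations that the test does not need) would be:

```yaml
# Sketch: same Ingress, relying only on the non-deprecated class field.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app-ingress
  namespace: abcx3ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-abcx3
  rules:
  - host: x3.abc.com
    http:
      paths:
      - path: /sample
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
```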
Let's apply the YAMLs and check the deployment status:

```console
NAME                              READY   STATUS    RESTARTS   AGE
pod/sample-app-587d9c6687-rsz8m   1/1     Running   0          45s

NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/sample-app-service   ClusterIP   10.43.7.240   <none>        80/TCP    40s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sample-app   1/1     1            1           45s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/sample-app-587d9c6687   1         1         1       45s

NAME                 CLASS         HOSTS        ADDRESS        PORTS   AGE
sample-app-ingress   nginx-abcx3   x3.abc.com   10.11.XXX.74   80      4m41s
```
Ingress controller logs:

```console
I1122 09:53:26.271027       7 controller.go:193] "Configuration changes detected, backend reload required"
I1122 09:53:26.318661       7 controller.go:213] "Backend successfully reloaded"
I1122 09:53:26.318921       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"abcx3apps", Name:"rke2-ingress-nginx-abcx3-controller-bb7db", UID:"d36ab2c9-5320-4fdf-af1b-8d868844ddb2", APIVersion:"v1", ResourceVersion:"90214687", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1122 09:54:33.348310       7 store.go:440] "Found valid IngressClass" ingress="abcx3ns/sample-app-ingress" ingressclass="nginx-abcx3"
I1122 09:54:33.348586       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"abcx3ns", Name:"sample-app-ingress", UID:"4bd60422-213c-4c9d-b899-11cb9a699b88", APIVersion:"networking.k8s.io/v1", ResourceVersion:"90242308", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1122 09:54:33.349454       7 controller.go:193] "Configuration changes detected, backend reload required"
I1122 09:54:33.408225       7 controller.go:213] "Backend successfully reloaded"
I1122 09:54:33.408546       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"abcx3apps", Name:"rke2-ingress-nginx-abcx3-controller-bb7db", UID:"d36ab2c9-5320-4fdf-af1b-8d868844ddb2", APIVersion:"v1", ResourceVersion:"90214687", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1122 09:55:20.722424       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"abcx3ns", Name:"sample-app-ingress", UID:"4bd60422-213c-4c9d-b899-11cb9a699b88", APIVersion:"networking.k8s.io/v1", ResourceVersion:"90242656", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
```
Trying to access the application in a browser gives the same "site can't be reached" error. The curl output:
```console
$ curl -iv http://x3.abc.com/sample
* Uses proxy env variable no_proxy == 'mumrhnsat.abcmcloud.com,127.0.0.0/8,XX.0.0.0/8,XXX.16.0.0/12,192.XXX.0.0/16,.svc,.cluster.local,10.XX.0.0,10.XX.0.0,10.XX.0.0,127.0.0.1,localhost,.abc.com'
*   Trying 10.11.XXX.74...
* TCP_NODELAY set
* connect to 10.11.XXX.74 port 80 failed: Connection timed out
* Failed to connect to x3.abc.com port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to x3.abc.com port 80: Connection timed out
```
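A connection timeout (rather than a refusal or an HTTP error) means no TCP handshake ever completed with 10.11.XXX.74:80, so the failure is below the ingress layer. One way to localize it is to probe each hop separately; this is a sketch I am adding, not part of the original report, and the node/cluster IPs are placeholders to substitute:

```shell
#!/bin/sh
# probe HOST PORT: report whether a plain HTTP request over TCP succeeds.
probe() {
  if curl -s -o /dev/null --connect-timeout 3 "http://$1:$2/"; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 NOT reachable"
  fi
}

# Suggested hops to test, innermost first (substitute real addresses):
#   probe <node-ip> 8082       # hostPort on a node running a controller pod
#   probe <cluster-ip> 80      # the Service ClusterIP, from inside the cluster
#   probe 10.11.XXX.74 80      # the MetalLB VIP, from the client machine
# If the hostPort probe already fails, check the node firewall (e.g. firewalld
# on RHEL) for ports 8082/8083 and for the health-check NodePort 31086.
```

The first controller works because it binds the conventional hostPorts 80/443, which are typically already open; the second controller's 8082/8083 may not be.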
Now let's keep everything the same and just change the Ingress resource's class to that of the first ingress controller, i.e. `nginx`.

Ingress resource for the first ingress controller:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app-ingress
  namespace: abcx3ns
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: y3.abc.com
    http:
      paths:
      - path: /sample
        pathType: Prefix
        backend:
          service:
            name: sample-app-service
            port:
              number: 80
```
Let's apply the Ingress YAML and check the site in a browser:

```console
$ kubectl get ing -n abcx3ns
NAME                 CLASS   HOSTS        ADDRESS        PORTS   AGE
sample-app-ingress   nginx   y3.abc.com   10.11.XXX.71   80      40s
```
Logs of the first ingress controller:

```console
I1122 10:12:49.193915       7 controller.go:193] "Configuration changes detected, backend reload required"
I1122 10:12:49.308725       7 controller.go:213] "Backend successfully reloaded"
I1122 10:12:49.309046       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"rke2-ingress-nginx-controller-bc475", UID:"e2e1a0f8-0f22-467c-98ef-fe1b3d47534c", APIVersion:"v1", ResourceVersion:"64608400", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1122 10:13:11.053653       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"abcx3ns", Name:"sample-app-ingress", UID:"2ef0e5c5-2e6f-4607-a216-ce7bd2d514f9", APIVersion:"networking.k8s.io/v1", ResourceVersion:"90250061", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W1122 10:13:11.054344       7 controller.go:1110] Error obtaining Endpoints for Service "domino-platform/nosuchservice": no object matching key "domino-platform/nosuchservice" in local store
```
And now the site is accessible in the browser. With an Ingress resource bound to the first ingress controller we can reach the site, but through the second controller not a single application is accessible. The curl output for y3.abc.com:
```console
$ curl -iv http://y3.abc.com/sample
* Uses proxy env variable no_proxy == 'mumrhnsat.abcmcloud.com,127.0.0.0/8,XX.0.0.0/8,XXX.16.0.0/12,192.XXX.0.0/16,.svc,.cluster.local,10.XX.0.0,10.XX.0.0,10.XX.0.0,127.0.0.1,localhost,.abc.com'
*   Trying 10.11.XXX.71...
* TCP_NODELAY set
* Connected to y3.abc.com (10.11.XXX.71) port 80 (#0)
> GET /sample HTTP/1.1
> Host: y3.abc.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Fri, 22 Nov 2024 10:23:10 GMT
Date: Fri, 22 Nov 2024 10:23:10 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 615
Content-Length: 615
< Connection: keep-alive
Connection: keep-alive
< Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
< ETag: "66fd630f-267"
ETag: "66fd630f-267"
< Accept-Ranges: bytes
Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host y3.abc.com left intact
```
This issue is currently awaiting triage. If ingress-nginx contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
- remove the screenshots
- look at the template of a new bug report
- edit this issue description and answer the questions asked in the template of a new bug report in this issue description
- make sure to format everything in markdown
- without the above, there is nothing to analyze or reproduce
- follow these steps to make sure that both controllers are unique https://kubernetes.github.io/ingress-nginx/faq/#multiple-controller-in-one-cluster
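For reference, an install of the upstream chart along the lines of that FAQ might look like this; the release name, namespace, and MetalLB annotation value are taken from this issue, and the chart keys are from the upstream ingress-nginx chart (verify them against the chart version you use):

```shell
# Sketch, not a verified command: upstream ingress-nginx chart keys assumed.
helm upgrade --install ingress-nginx-abcx3 ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace abcx3apps \
  --set controller.ingressClass=nginx-abcx3 \
  --set controller.ingressClassResource.name=nginx-abcx3 \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-abcx3 \
  --set controller.electionID=ingress-nginx-abcx3-leader \
  --set controller.service.annotations."metallb\.universe\.tf/loadBalancerIPs"=10.11.XXX.74
```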
/remove-kind bug
/ind support
/kind suppport
@longwuyuan: The label(s) kind/suppport
cannot be applied, because the repository doesn't have them.
In response to this:
/kind suppport
/kind support
Hi @longwuyuan,
I have tried to answer all the questions; any help would be appreciated.
Now updated, @longwuyuan. Is anything missing now?
I was pasting screenshots because after copy-pasting, everything came out on one flat line.
You can help the readers help you in many ways
- You can install 2 instances of the controller as suggested here https://kubernetes.github.io/ingress-nginx/faq/#multiple-controller-in-one-cluster
- You can read the questions asked in the template of a new bug report
- You can answer all the related questions
- You can make sure to provide controller details for both instances
- You can provide curl commands (with -iv) for both ingresses with their responses
- You can provide logs for both controllers to help readers guess what may have happened
- You can use markdown format to edit the issue description
Updated now, sorry for the trouble. Please help.
Hi @longwuyuan
Did you find any issue with this?
@Ganesh-hub-5, thanks for helping with the detailed info.
The root cause, as you can see, is this:

```console
connect to 10.11.XXX.74 port 80 failed: Connection timed out
```
If you install the ingress-nginx controller as per this project's documentation, then we can support you further. But you are using a build and a chart from Rancher, so it's better to ask for support there.
I will close this issue for now, because if you don't use a build and release from this project, there is no action item pending on us.
The root cause begins with port 80 not being available and extends into hostPort being used, etc. I already posted a link earlier on how to differentiate two instances of the ingress-nginx controller in one cluster.
Once you have installed as per the project docs I posted earlier, you can edit in all the info asked for in the new-issue template to reflect the new state. Then you can reopen this issue.
/close
@longwuyuan: Closing this issue.
In response to this:
/close