istio/istio

Multiple Ingress resources with same host result in 404 for HTTPS

verysonglaa opened this issue · 12 comments

Bug description
Multiple Ingress resources with the same hostname result in HTTP 404 over HTTPS (but not over HTTP), the same behavior as https://istio.io/docs/ops/common-problems/network-issues/#404-errors-occur-when-multiple-gateways-configured-with-same-tls-certificate (I guess an Ingress resource also creates a Gateway in the background). Gateways serving the same hosts also stop working.

I guess this should be mentioned in the docs as well. A solution without a VirtualService, using multiple Ingress resources with TLS and the same hostname, should also be possible (we have many Ingress resources in different namespaces from different teams, but only one hostname). In 1.5.4, with istio-autogenerated-k8s-ingress, this was not a problem.

[ ] Configuration Infrastructure
[x] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior
HTTP 200 instead of 404 for HTTPS connections

Steps to reproduce the bug

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver-ingress-2
  namespace: echoserver
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: testurl.com
    http:
      paths:
      - backend:
          serviceName: echo-service
          servicePort: 80
        path: /echoserver2
  tls:
  - hosts:
    - testurl.com
    secretName: ingressgateway-certs
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver-ingress-1
  namespace: echoserver
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: testurl.com
    http:
      paths:
      - backend:
          serviceName: echo-service
          servicePort: 80
        path: /echoserver1
  tls:
  - hosts:
    - testurl.com
    secretName: ingressgateway-certs

Now open both URLs in a browser:
https://testurl.com/echoserver2 -> 200
https://testurl.com/echoserver1 -> 404

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
k8s: 1.15.7

istioctl version
client version: 1.6.0
control plane version: 1.6.0
data plane version: 1.6.0 (3 proxies)

How was Istio installed?
IstioOperator

Does this work if you have 1 Ingress like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver-ingress-1
spec:
  rules:
  - host: testurl.com
    http:
      paths:
      - backend:
          serviceName: echo-service
          servicePort: 80
        path: /echoserver1
  - host: testurl.com
    http:
      paths:
      - backend:
          serviceName: echo-service
          servicePort: 80
        path: /echoserver2
  tls:
  - hosts:
    - testurl.com
    secretName: ingressgateway-certs

?

Or are the Ingress resources even in different namespaces?

@howardjohn yes, this works, but we still need a solution for Ingress resources in different namespaces. Furthermore, VirtualServices on a Gateway configured with the same host also return 404 for HTTPS after the Ingress is created.

If you check the status, you can see that there is no address defined for the second Ingress:

kubectl get ing -n echoserver
NAME                   HOSTS         ADDRESS        PORTS     AGE
echoserver-ingress-2   testurl.com   52.157.xx.xx   80, 443   17m
echoserver-ingress-1   testurl.com                  80, 443   45s

How do Ingress resources work? Is there some hidden Gateway configured for every Ingress in the background? How can I debug/troubleshoot this?

BTW: I corrected the Ingress definition in the original post (adding the namespace and the ingress class annotation).

Yeah internally we create a Gateway object for each one. You can see it from localhost:8080/debug/configz on istiod
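To look at those autogenerated Gateway objects yourself, a sketch (assuming the default `istio-system` namespace and `istiod` deployment name):

```shell
# Forward istiod's debug port to localhost.
kubectl -n istio-system port-forward deploy/istiod 8080:8080 &

# Dump istiod's internal config and filter for the Gateways
# autogenerated from Ingress resources.
curl -s localhost:8080/debug/configz | grep -o 'istio-autogenerated-k8s-ingress[^"]*'
```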

I see the creation in the istiod logs; the difference between creating the Ingresses is the line

2020-06-04T13:35:16.636976Z	warn	constructed http route config for port 443 with no vhosts; Setting up a default 404 vhost

for the second Ingress; I guess this leads to the 404.
Here is the full log from creating the two Ingresses:

2020-06-04T13:34:54.910744Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.33:53822": remote error: tls: error decrypting message
2020-06-04T13:34:55.657136Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.52:47024": remote error: tls: error decrypting message
2020-06-04T13:34:57.851278Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.20:36868": remote error: tls: error decrypting message
2020-06-04T13:34:58.467597Z	info	ingress event add for echoserver/echoserver-ingress
2020-06-04T13:34:58.567855Z	info	ads	Push debounce stable[29] 2: 100.179527ms since last change, 100.193227ms since last push, full=true
2020-06-04T13:34:58.570293Z	info	ads	XDS: Pushing:2020-06-04T13:34:58Z/28 Services:26 ConnectedEndpoints:3
2020-06-04T13:34:58.570528Z	info	ads	Pushing router~10.240.0.55~istio-ingressgateway-f99d68568-bn56q.istio-system~istio-system.svc.cluster.local-10
2020-06-04T13:34:58.570974Z	info	ads	LDS: PUSH for node:istio-ingressgateway-f99d68568-bn56q.istio-system listeners:2
2020-06-04T13:34:58.571070Z	warn	Gateway missing for route https.443.https-443-ingress-echoserver-ingress-2-echoserver-0.echoserver-ingress-2-istio-autogenerated-k8s-ingress.istio-system. This is normal if gateway was recently deleted. Have map[http.80:[port:<number:80 protocol:"HTTP" name:"http-80-ingress-echoserver-ingress-echoserver" > hosts:"*" ] https.443.https-443-ingress-echoserver-ingress-echoserver-0.echoserver-ingress-istio-autogenerated-k8s-ingress.istio-system:[port:<number:443 protocol:"HTTPS" name:"https-443-ingress-echoserver-ingress-echoserver-0" > hosts:"testurl.com" tls:<mode:SIMPLE credential_name:"ingressgateway-certs" > ]]
2020-06-04T13:34:58.571216Z	info	ads	RDS: PUSH for node:istio-ingressgateway-f99d68568-bn56q.istio-system routes:2
2020-06-04T13:34:58.579539Z	warn	Gateway missing for route https.443.https-443-ingress-echoserver-ingress-2-echoserver-0.echoserver-ingress-2-istio-autogenerated-k8s-ingress.istio-system. This is normal if gateway was recently deleted. Have map[http.80:[port:<number:80 protocol:"HTTP" name:"http-80-ingress-echoserver-ingress-echoserver" > hosts:"*" ] https.443.https-443-ingress-echoserver-ingress-echoserver-0.echoserver-ingress-istio-autogenerated-k8s-ingress.istio-system:[port:<number:443 protocol:"HTTPS" name:"https-443-ingress-echoserver-ingress-echoserver-0" > hosts:"testurl.com" tls:<mode:SIMPLE credential_name:"ingressgateway-certs" > ]]
2020-06-04T13:34:58.579729Z	info	ads	RDS: PUSH for node:istio-ingressgateway-f99d68568-bn56q.istio-system routes:3
2020-06-04T13:35:00.317053Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.53:50166": remote error: tls: error decrypting message
2020-06-04T13:35:05.579890Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.16:40582": remote error: tls: error decrypting message
2020-06-04T13:35:07.165151Z	info	ads	Push Status: {}
2020-06-04T13:35:09.453584Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.33:54034": remote error: tls: error decrypting message
2020-06-04T13:35:12.779227Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.53:50350": remote error: tls: error decrypting message
2020-06-04T13:35:16.262867Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.16:40766": remote error: tls: error decrypting message
2020-06-04T13:35:16.533738Z	info	ingress event add for echoserver/echoserver-ingress-2
2020-06-04T13:35:16.634022Z	info	ads	Push debounce stable[30] 2: 100.157228ms since last change, 100.171028ms since last push, full=true
2020-06-04T13:35:16.636176Z	info	ads	XDS: Pushing:2020-06-04T13:35:16Z/29 Services:26 ConnectedEndpoints:3
2020-06-04T13:35:16.636382Z	info	ads	Pushing router~10.240.0.55~istio-ingressgateway-f99d68568-bn56q.istio-system~istio-system.svc.cluster.local-10
2020-06-04T13:35:16.636842Z	info	ads	LDS: PUSH for node:istio-ingressgateway-f99d68568-bn56q.istio-system listeners:2
2020-06-04T13:35:16.636926Z	warn	Gateway missing for route https.443.https-443-ingress-echoserver-ingress-echoserver-0.echoserver-ingress-istio-autogenerated-k8s-ingress.istio-system. This is normal if gateway was recently deleted. Have map[http.80:[port:<number:80 protocol:"HTTP" name:"http-80-ingress-echoserver-ingress-2-echoserver" > hosts:"*"  port:<number:80 protocol:"HTTP" name:"http-80-ingress-echoserver-ingress-echoserver" > hosts:"*" ] https.443.https-443-ingress-echoserver-ingress-2-echoserver-0.echoserver-ingress-2-istio-autogenerated-k8s-ingress.istio-system:[port:<number:443 protocol:"HTTPS" name:"https-443-ingress-echoserver-ingress-2-echoserver-0" > hosts:"testurl.com" tls:<mode:SIMPLE credential_name:"ingressgateway-certs" > ]]
2020-06-04T13:35:16.636976Z	warn	constructed http route config for port 443 with no vhosts; Setting up a default 404 vhost
2020-06-04T13:35:16.637138Z	info	ads	RDS: PUSH for node:istio-ingressgateway-f99d68568-bn56q.istio-system routes:3
2020-06-04T13:35:17.146081Z	info	grpc: Server.Serve failed to complete security handshake from "10.240.0.20:37190": remote error: tls: error decrypting message
2020-06-04T13:35:17.164998Z	info	ads	Push Status: {}

Not sure why there are so many

info	grpc: Server.Serve failed to complete security handshake from "10.240.0.33:53822": remote error: tls: error decrypting message 

messages, though.
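To confirm which routes the gateway actually received for port 443 after the second Ingress is created, one option (a sketch; the gateway pod name is a placeholder):

```shell
# List the route configurations Envoy received on the ingress
# gateway; the 443 route config should show the expected vhosts,
# or the default 404 vhost mentioned in the warning above.
istioctl proxy-config routes <ingressgateway-pod> -n istio-system -o json
```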

@howardjohn How would you recommend we re-use the same wildcard cert across multiple ingress objects? We rely on this extensively for our environments since we let our app teams create arbitrary hostnames on the ingress objects for use under a common domain - with the wildcard cert we obtained.
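A possible workaround along the lines of the doc page linked in the original post is to terminate TLS for the wildcard host on one shared Gateway, and have each team bind a VirtualService from its own namespace. A sketch (all names and the example.com domain are illustrative, not from this thread):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: shared-wildcard-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingressgateway-certs  # the shared wildcard cert
    hosts:
    - "*.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echoserver
  namespace: echoserver
spec:
  hosts:
  - testurl.example.com
  gateways:
  - istio-system/shared-wildcard-gateway
  http:
  - match:
    - uri:
        prefix: /echoserver1
    route:
    - destination:
        host: echo-service
        port:
          number: 80
```

Because TLS for the host is configured in exactly one place, the per-Gateway certificate conflict described above should not arise.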

@howardjohn Can we remove the lifecycle/stale from this issue? Thanks!

not stale

not stale

Highly related to #21394

@howardjohn Can we unstale this issue again? Thanks!

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-02-23. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

is this fixed?