jetstack/kube-lego

ingress.class not propagated

carlpett opened this issue · 4 comments

First off, I'm not sure which of these things are cause and effect, or if they are unrelated, so let me know if I should split this issue or rephrase it. Also, this is a (somewhat rambling) summary of a discussion in #kube-lego on the k8s Slack, I hope I didn't miss any context.

Edit: I solved the latter half of this issue by fixing my ingress configuration, but I'll keep the original description for now.

I had problems when trying out kube-lego on my test case today, replacing the self-signed certs on our monitoring (Grafana/Kibana/etc.) ingress point. To test, I have a fairly empty cluster. Here's my setup:

  • Kubernetes 1.7.7 (Azure ACS)
  • Nginx Ingress Controller (0.9.0-beta.17, installed via Helm), with a custom ingress class (kubernetes.io/ingress.class: monitoring), deployed to the monitoring namespace.
  • Four services in the monitoring namespace, and an ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-web
  annotations:
    kubernetes.io/ingress.class: "monitoring"
    kubernetes.io/ingress.provider: "nginx"
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/rewrite-target: "/"

    ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    ingress.kubernetes.io/auth-signin: "https://$host/oauth2/sign_in"
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-server
          servicePort: 9090
  - http:
      paths:
      - path: /alertmanager
        backend:
          serviceName: prometheus-alertmanager
          servicePort: 80
  - http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana-grafana
          servicePort: 3000
  - http:
      paths:
      - path: /kibana
        backend:
          serviceName: kibana
          servicePort: 5601
  tls:
  - hosts:
    - monitoring.example.com
    secretName: monitoring-ingress-tls

So, up to this point everything works fine with the self-signed setup. I then deployed kube-lego (to the kube-system namespace) using Helm with the stable chart. However, it does not accept my "monitoring" class, so my first step was to upgrade to the canary tag (after reading some issue or post on Slack).

After adding LEGO_SUPPORTED_INGRESS_CLASS: "monitoring" to values.yaml, I start getting 404s in the reachability tests. Looking at the generated ingress resource, it does not have the correct ingress.class; it is set to nginx instead. Bug or misconfiguration?
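For context, the override looked roughly like this (a sketch: the `config:` map follows the stable kube-lego chart's convention for passing environment variables, and the email address is a placeholder; key names may differ between chart versions):

```yaml
# values.yaml for the kube-lego Helm chart
image:
  tag: canary   # the stable tag does not accept custom ingress classes
config:
  LEGO_EMAIL: admin@example.com                 # placeholder address
  LEGO_SUPPORTED_INGRESS_CLASS: "monitoring"    # match the controller's class
```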

(Setting LEGO_DEFAULT_INGRESS_CLASS: "monitoring" and LEGO_DEFAULT_INGRESS_PROVIDER: "nginx" "works", but from the docs I would have expected the default values to be used only when I hadn't annotated the target ingresses myself. As an additional confusion, having the default provider fall back to the default class strikes me as backwards; shouldn't it be the other way around?)

Moving on: once I have the correct class and provider, my certificates are generated. I can read the certificates with kubectl and verify that they are indeed correctly issued by Let's Encrypt. However, my ingresses don't use them. Connecting to the ingress, I still get the "Kubernetes Ingress Controller Fake Certificate". I've tried recreating the ingress, restarting the controller, recreating and restarting kube-lego... I can't figure out why it serves the wrong cert. I've also tried pointing defaultSSLCertificate on the nginx-controller chart at the cert generated by kube-lego; it doesn't seem to make any difference.
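For anyone wanting to reproduce the check: this is roughly how I confirmed which certificate the controller actually presents (the hostname is a placeholder for your ingress endpoint, and this obviously needs network access to the cluster):

```shell
# Ask the ingress controller for its certificate via SNI and print
# the subject/issuer. The fallback cert shows up as
# "Kubernetes Ingress Controller Fake Certificate".
echo | openssl s_client -connect monitoring.example.com:443 \
    -servername monitoring.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```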

There are no suspicious entries in the controller logs, and kube-lego also seems happy.

As ideas always come too late, it was only now that I looked at the generated nginx config to check the certificate configuration. The ingress certs are not mentioned; the only certificate-related configuration is these lines:

ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

Maybe this is actually an nginx-ingress bug?

Edit: Having found this part, realizing that there needed to be a host: ... on my ingress rules wasn't far away. So, that part of the mystery is resolved as user error, then :)
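For anyone hitting the same thing, the rules each needed an explicit host matching the tls hosts entry, roughly like this (abbreviated to the first rule; the remaining rules get the same host):

```yaml
spec:
  rules:
  - host: monitoring.example.com   # must match the entry under tls.hosts,
    http:                          # otherwise nginx falls back to the fake cert
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-server
          servicePort: 9090
  # ...remaining rules, each with the same host: field
  tls:
  - hosts:
    - monitoring.example.com
    secretName: monitoring-ingress-tls
```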

What remains is the question about the ingress.class in the original post. Bug or user error? Let me know if I should close this.

@carlpett Thanks for the detailed issue, looks like I missed a TODO when I was re-introducing the provider concept --

// TODO: use the ingres class as specified on the ingress we are

I'll submit a PR this weekend to fix that, as it currently only looks at the default you have set.

As for "having default provider set to default class strikes me as backwards": yeah, it is backwards. To stay backwards compatible it had to be implemented that way, since provider came after class, even though provider is in a way the more low-level configuration.

@jackhopner Nice, looking forward to it!

Regarding backwards compatibility, I see your point, but isn't this "forwards incompatible" instead? :)

Sorry about the wait, #313 @carlpett