jetstack/kube-oidc-proxy

Kube-OIDC-Proxy 404 error on EKS cluster using Istio

widdix123 opened this issue · 19 comments

I get this error when I run a curl command against the kube-oidc-proxy URL deployed behind Istio.

The certs are generated by cert-manager, and the kube-oidc-proxy URL is served through the Istio Ingress Gateway instead of a load balancer.

Pods running in the kube-oidc-proxy namespace:

NAME                               READY   STATUS    RESTARTS   AGE
kube-oidc-proxy-6ddf69485b-phln4   2/2     Running   0          41m

$ kubectl logs kube-oidc-proxy-6ddf69485b-phln4 -n kube-oidc-proxy -c kube-oidc-proxy

I0608 13:25:16.526357 1 secure_serving.go:178] Serving securely on [::]:443
I0608 13:25:39.737367 1 probe.go:69] OIDC provider initialized, proxy ready

$ curl https://oidc.v4.xxxxx.com -v

*   Trying 18.211.247.153...
* TCP_NODELAY set
* Connected to oidc.v4.xxxxxx.com (xxxxxxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
    CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=oidc.v4.xxxxxx.com
*  start date: Jun 8 06:50:47 2020 GMT
*  expire date: Sep 6 06:50:47 2020 GMT
*  subjectAltName: host "oidc.v4.xxxxxx.com" matched cert's "oidc.v4.xxxxxx.com"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fa04680a400)
> GET / HTTP/2
> Host: oidc.v4.xxxxxx.com
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
< HTTP/2 404
< date: Mon, 08 Jun 2020 13:41:26 GMT
< server: istio-envoy
<
* Connection #0 to host oidc.v4.fpcomplete.com left intact
* Closing connection 0

Can someone suggest what I am doing wrong?

@JoshVanL - Any input? We are stuck, and the documentation is not clear about how kube-oidc-proxy is meant to be accessed.

Hi @widdix123, looking at the logs, it looks like your request isn't getting to kube-oidc-proxy at all, especially since you got a 404 rather than a 401 on a clean curl request.

I'd suggest taking a look at your routes on the istio side to make sure that's set up correctly.
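
If it helps, here is a minimal sketch of a Gateway and VirtualService that pass TLS through to kube-oidc-proxy, so the proxy terminates TLS itself rather than the gateway (the hostnames, resource names, and service address are assumptions, not taken from your setup):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kube-oidc-proxy
  namespace: kube-oidc-proxy
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: TLS
    tls:
      mode: PASSTHROUGH   # do not terminate TLS at the gateway
    hosts:
    - oidc.v4.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kube-oidc-proxy
  namespace: kube-oidc-proxy
spec:
  hosts:
  - oidc.v4.example.com
  gateways:
  - kube-oidc-proxy
  tls:
  - match:
    - port: 443
      sniHosts:
      - oidc.v4.example.com
    route:
    - destination:
        host: kube-oidc-proxy.kube-oidc-proxy.svc.cluster.local
        port:
          number: 443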

Thanks @JoshVanL - Yes, TLS was being terminated at the Istio IngressGateway, which I have now fixed. I now get a 401 as you suggested.

I am using Gangway to fetch the kubeconfig file and Dex to authenticate the user (both running on Istio). The certificates for all the domains are issued by the Let's Encrypt CA, and the same certificates are used throughout.

On running kubectl get pods, I get the "x509: certificate signed by unknown authority" error:

I0609 08:34:25.389313    5204 round_trippers.go:444] Response Headers:
I0609 08:34:25.389423    5204 cached_discovery.go:121] skipped caching discovery info due to Get https://oidc.v4.xxxxx.com/api?timeout=32s: x509: certificate signed by unknown authority

The requests are reaching the pods now

2020-06-09 03:04:36.501543 I | http: TLS handshake error from 127.0.0.1:47520: remote error: tls: bad certificate
2020-06-09 03:04:42.058269 I | http: TLS handshake error from 127.0.0.1:47672: remote error: tls: bad certificate
2020-06-09 03:04:47.477603 I | http: TLS handshake error from 127.0.0.1:47812: remote error: tls: bad certificate

Server: https://oidc.v4.xxxxx.com (kube-oidc-proxy URL)

Kubeconfig generated by Gangway for user widdix@xxxx.com, which has an RBAC role created in the cluster:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca-xxxx-v4-prod.pem
    server: https://oidc.v4.xxxxx.com
  name: xxx-v4-xxx
contexts:
- context:
    cluster: xxx-v4-xxx
    user: widdix@xxxx.com
  name: xxxx-v4-xxxx
current-context: xxx-v4-xxxx
kind: Config
preferences: {}
users:
- name: widdix123@xxxx.com
  user:
    auth-provider:
      config:
        client-id: gangway
        client-secret: xxxxxx
        id-token: xxxxxx
        idp-issuer-url: https://dex.v4.xxxxxx.com/dex
        refresh-token: xxxxxxx
      name: oidc

ConfigMap of Gangway:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gangway
  namespace: gangway
data:
  gangway.yaml: |
    clusterName: "xxxx-v4-prod"
    authorizeURL: "https://dex.v4.xxxx.com/dex/auth"
    tokenURL: "https://dex.v4.xxxxx.com/dex/token"
    redirectURL: "https://gangway.v4.xxxxx.com/callback"
    scopes: ["openid", "profile", "email", "offline_access"]
    clientID: "gangway"
    clientSecret: xxxxx
    usernameClaim: "name"
    emailClaim: "email"
    apiServerURL: "https://oidc.v4.xxxx.com"

Secrets for kube-oidc-proxy:

apiVersion: v1
data:
  oidc.ca-pem: <LetsEncrypt CA>
  oidc.issuer-url: (https://dex.v4.xxxxx.com/dex)
  oidc.username-claim: (email)
  oidc.client-id: (gangway)
kind: Secret
metadata:
  name: kube-oidc-proxy-config
  namespace: kube-oidc-proxy
type: Opaque
---
apiVersion: v1
data:
  tls.crt: <copied from secret of value generated by letsencrypt during certificate creation>
  tls.key: <copied from secret of value generated by letsencrypt during certificate creation>
kind: Secret
metadata:
  name: kube-oidc-proxy-tls
  namespace: kube-oidc-proxy
type: kubernetes.io/tls

Can you please take a look at the kubeconfig generated by Gangway and suggest?

@JoshVanL - When I use the attached PEM file for Let's Encrypt, I now get:

error: You must be logged in to the server (the server has asked for the client to provide credentials)

letsencrpyt.pem.txt

Errors in the kube-oidc-proxy pods:

kubectl logs -f kube-oidc-proxy-854dd94875-7ts2b -n kube-oidc-proxy -c kube-oidc-proxy


I0609 07:20:42.107742       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/oidc/tls/crt.pem::/etc/oidc/tls/key.pem
I0609 07:20:42.107937       1 secure_serving.go:178] Serving securely on [::]:443
I0609 07:20:42.108005       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0609 07:21:04.638533       1 probe.go:69] OIDC provider initialized, proxy ready
E0609 07:22:39.877413       1 proxy.go:215] unable to authenticate the request via TokenReview due to an error (127.0.0.1:36882): error authenticating using token review: [invalid bearer token, unknown]
E0609 07:22:40.131877       1 proxy.go:215] unable to authenticate the request via TokenReview due to an error (127.0.0.1:36882): error authenticating using token review: [invalid bearer token,

While running kubectl get pods

I0609 13:23:26.547435   13326 round_trippers.go:438] GET https://oidc.v4.xxx.com/api?timeout=32s 401 Unauthorized in 339 milliseconds
I0609 13:23:26.547464   13326 round_trippers.go:444] Response Headers:
I0609 13:23:26.547477   13326 round_trippers.go:447]     Date: Tue, 09 Jun 2020 07:53:26 GMT
I0609 13:23:26.547488   13326 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0609 13:23:26.547496   13326 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0609 13:23:26.547502   13326 round_trippers.go:447]     Content-Length: 13
I0609 13:23:26.560399   13326 request.go:942] Response Body: Unauthorized
I0609 13:23:26.575080   13326 request.go:1145] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }

@widdix123 Looks like the token is not getting sent correctly from the client, or the server is potentially misconfigured (the client is less likely at fault, being generated by Gangway). Would you be able to share your deployment configuration for kube-oidc-proxy?

@JoshVanL - I just added one option, "--token-passthrough":

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kube-oidc-proxy
  name: kube-oidc-proxy
  namespace: kube-oidc-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-oidc-proxy
  template:
    metadata:
      labels:
        app: kube-oidc-proxy
    spec:
      serviceAccountName: kube-oidc-proxy
      containers:
      - image: quay.io/jetstack/kube-oidc-proxy:v0.3.0
        ports:
        - containerPort: 443
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
        name: kube-oidc-proxy
        command: ["kube-oidc-proxy"]
        args:
          - "--secure-port=443"
          - "--tls-cert-file=/etc/oidc/tls/crt.pem"
          - "--tls-private-key-file=/etc/oidc/tls/key.pem"
          - "--oidc-client-id=$(OIDC_CLIENT_ID)"
          - "--oidc-issuer-url=$(OIDC_ISSUER_URL)"
          - "--oidc-username-claim=$(OIDC_USERNAME_CLAIM)"
          - "--oidc-ca-file=/etc/oidc/oidc-ca.pem"
          - "--token-passthrough"
        env:
        - name: OIDC_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: kube-oidc-proxy-config
              key: oidc.client-id
        - name: OIDC_ISSUER_URL
          valueFrom:
            secretKeyRef:
              name: kube-oidc-proxy-config
              key: oidc.issuer-url
        - name: OIDC_USERNAME_CLAIM
          valueFrom:
            secretKeyRef:
              name: kube-oidc-proxy-config
              key: oidc.username-claim
        volumeMounts:
          - name: kube-oidc-proxy-config
            mountPath: /etc/oidc
            readOnly: true
          - name: kube-oidc-proxy-tls
            mountPath: /etc/oidc/tls
            readOnly: true
      volumes:
        - name: kube-oidc-proxy-config
          secret:
            secretName: kube-oidc-proxy-config
            items:
            - key: oidc.ca-pem
              path: oidc-ca.pem
        - name: kube-oidc-proxy-tls
          secret:
            secretName: kube-oidc-proxy-tls
            items:
            - key: tls.crt
              path: crt.pem
            - key: tls.key
              path: key.pem

@JoshVanL - One more thing: the kubeconfig that I download gives me these errors in the pods and during kubectl execution.

Logs:

2020-06-09 10:46:21.218084 I | http: TLS handshake error from 127.0.0.1:47954: remote error: tls: bad certificate
2020-06-09 10:46:26.664830 I | http: TLS handshake error from 127.0.0.1:48218: remote error: tls: bad certificate

kubectl execution:

I0609 16:16:26.549335   16906 round_trippers.go:449] Response Headers:
I0609 16:16:26.549377   16906 cached_discovery.go:121] skipped caching discovery info due to Get https://oidc.v4.xxxxx.com/api?timeout=32s: x509: certificate signed by unknown authority

I have to manually edit the kubeconfig and add the correct Let's Encrypt CA cert to get to the 401 error.

I thought I would update you in case this helps in any way.

The reason for needing to manually add the CA is that cert-manager does not bundle the CA when requesting certificates from Let's Encrypt. You could use a private PKI CA to solve this problem.
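
For example, here is a sketch of pointing the kubeconfig at the Let's Encrypt chain manually (the chain URL and file name are assumptions; check the current chain on the Let's Encrypt site):

# Fetch the issuing chain and embed it in the cluster entry of the kubeconfig
curl -o letsencrypt-chain.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem
kubectl config set-cluster xxx-v4-xxx \
  --server=https://oidc.v4.xxxxx.com \
  --certificate-authority=letsencrypt-chain.pem \
  --embed-certs=true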

It's hard to see where the issue is here, and I am starting to think that it's an issue with the token itself. Could you run kube-oidc-proxy with a higher log level (--v=10)?
Could you also try with a raw curl request? It may well be an issue with the kubeconfig that I'm not seeing:

curl -k https://oidc.v4.xxxxx.com -H 'Authorization: bearer xxxxxxx'

@JoshVanL - After adding --v=10, I get one error on startup:

I0609 11:31:32.114451       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/etc/oidc/tls/crt.pem::/etc/oidc/tls/key.pem"
I0609 11:31:32.115663       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/oidc/tls/crt.pem::/etc/oidc/tls/key.pem
I0609 11:31:32.115836       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/oidc/tls/crt.pem::/etc/oidc/tls/key.pem"]: "oidc.v4.fpcomplete.com" [serving,client] validServingFor=[oidc.v4.xxxx.com] issuer="Let's Encrypt Authority X3" (2020-06-08 06:50:47 +0000 UTC to 2020-09-06 06:50:47 +0000 UTC (now=2020-06-09 11:31:32.115804994 +0000 UTC))
I0609 11:31:32.115884       1 secure_serving.go:178] Serving securely on [::]:443
I0609 11:31:32.115958       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0609 11:31:52.880250       1 probe.go:69] OIDC provider initialized, proxy ready
I0609 11:31:52.880282       1 probe.go:70] OIDC provider initialized, readiness check returned error: oidc: verify token: oidc: expected audience "gangway" got []

curl output, where I added the token after "bearer":

curl -v https://oidc.v4.fpcomplete.com -H 'Authorization: bearer xxxxxxx'

I0609 11:34:29.904176       1 request.go:1068] Request Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":".xxxx"},"status":{"user":{}}}
I0609 11:34:29.904406       1 round_trippers.go:423] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kube-oidc-proxy/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer xxxxxx" 'https://172.20.0.1:443/apis/authentication.k8s.io/v1/tokenreviews'
I0609 11:34:29.913604       1 round_trippers.go:443] POST https://172.20.0.1:443/apis/authentication.k8s.io/v1/tokenreviews 201 Created in 9 milliseconds
I0609 11:34:29.913640       1 round_trippers.go:449] Response Headers:
I0609 11:34:29.913647       1 round_trippers.go:452]     Audit-Id: a8a29437-7213-4b21-8997-7892b69c09f3
I0609 11:34:29.913654       1 round_trippers.go:452]     Cache-Control: no-cache, private
I0609 11:34:29.913711       1 round_trippers.go:452]     Content-Type: application/json
I0609 11:34:29.913744       1 round_trippers.go:452]     Content-Length: 954
I0609 11:34:29.913753       1 round_trippers.go:452]     Date: Tue, 09 Jun 2020 11:34:29 GMT
I0609 11:34:29.913818       1 request.go:1068] Response Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":xxxx"},"status":{"user":{},"error":"[invalid bearer token, unknown]"}}
E0609 11:34:29.914876       1 proxy.go:215] unable to authenticate the request via TokenReview due to an error (127.0.0.1:43258): error authenticating using token review: [invalid bearer token, unknown]
I0609 11:34:29.914915       1 handlers.go:169] unauthenticated user request 127.0.0.1:43258
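
For reference, the TokenReview call the proxy makes above can also be reproduced by hand to see how the API server judges a given token; a minimal sketch, with the token as a placeholder:

kubectl create -o yaml -f - <<EOF
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: xxxxxx
EOF

The returned status carries the same authenticated/error fields that show up in the proxy logs.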

@JoshVanL - I enabled the flags below in kube-oidc-proxy.yaml, but still the same error:

          - "--token-passthrough"
          - "--token-passthrough-audiences=gangway"
          - "--v=10"`

Logs

I0609 12:33:07.804316       1 probe.go:70] OIDC provider initialized, readiness check returned error: oidc: verify token: oidc: expected audience "gangway" got []

@widdix123 Token passthrough is only really wanted if you are also expecting Kubernetes service account tokens to be authenticated through kube-oidc-proxy. Not sure that is what you want, especially with the gangway audience there ^.

Can you verify that the token you are passing to curl is valid against the Dex public key referenced at the well-known endpoint?
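
For example, a rough sketch in shell (the issuer URL is redacted as elsewhere in this thread; jq and python3 are assumed available):

# The discovery document advertises the JWKS the token must verify against
curl -s https://dex.v4.xxxxx.com/dex/.well-known/openid-configuration | jq .jwks_uri
curl -s https://dex.v4.xxxxx.com/dex/keys

# Decode the token's claims (second dot-separated segment) to eyeball iss/aud/exp
payload=$(echo "$TOKEN" | cut -d. -f2)
python3 -c 'import base64,sys; s=sys.argv[1]; print(base64.urlsafe_b64decode(s + "=" * (-len(s) % 4)).decode())' "$payload"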

@JoshVanL - I was not sure how to do that, but I tried using this Python script and the output is:

TokenChecker.py.txt
jwt.py.txt

{"keys": []}

{'iss': 'https://dex.v4.xxxxxx.com/dex', 'sub': 'xxxxxxx', 'aud': 'gangway', 'exp': 1591787955, 'iat': 1591701555, 'at_hash': 'YNh1V1D7bDFqFOM-6wbypw', 'email': 'abhi@xxxx.com', 'email_verified': True, 'name': 'Abhishek Gupta'}

I provided a different Dex URL and it gives "ERROR:root:Invalid Authorization header. JWT Signature verification failed".

So the token looks OK.

@JoshVanL - I also verified the token against the keys using the https://npm.runkit.com/jwk-to-pem site. It shows me that the generated token is correct and has my details.

Also, if I don't enable passthrough, I see only "I0610 03:27:32.151909 1 handlers.go:169] unauthenticated user request 127.0.0.1:47980" in the log.

But after enabling the passthrough option (--token-passthrough), I also see the below in the logs:

I0610 03:27:32.151707       1 round_trippers.go:443] POST https://172.20.0.1:443/apis/authentication.k8s.io/v1/tokenreviews 201 Created in 2 milliseconds
I0610 03:27:32.151736       1 round_trippers.go:449] Response Headers:
I0610 03:27:32.151742       1 round_trippers.go:452]     Content-Type: application/json
I0610 03:27:32.151746       1 round_trippers.go:452]     Content-Length: 978
I0610 03:27:32.151751       1 round_trippers.go:452]     Date: Wed, 10 Jun 2020 03:27:32 GMT
I0610 03:27:32.151755       1 round_trippers.go:452]     Audit-Id: 7577a2a3-2aa2-4448-8b44-5a45ffe38553
I0610 03:27:32.151758       1 round_trippers.go:452]     Cache-Control: no-cache, private
I0610 03:27:32.151782       1 request.go:1068] Response Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3OGMxZDM5ODkwNTY3Y2Q2ZjcxYTg2Mjc0Mzk3ZDQwNThlN2ZkMTUifQ.eyJpc3MiOiJodHRwczovL2RleC52NC5mcGNvbXBsZXRlLmNvbS9kZXgiLCJzdWIiOiJDaFV4TURjNU1qZ3lOemMwTVRNd01ETTJPREUxTXpnU0JtZHZiMmRzWlEiLCJhdWQiOiJnYW5nd2F5IiwiZXhwIjoxNTkxODQ2MDQzLCJpYXQiOjE1OTE3NTk2NDMsImF0X2hhc2giOiJGX0NFejhfYU9xSGMyZUxNNW1ROTZRIiwiZW1haWwiOiJhYmhpQGZwY29tcGxldGUuY29tIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsIm5hbWUiOiJBYmhpc2hlayBHdXB0YSJ9.f4E-sKt6GisE8kHBb-B5wIwbG5QfDh05mm5hOYUiUcC0WoEFsuITNxB7xfx7No79A9v57eGEtC11FaT1Ik0UElxUucWFuAfqYuaYwySSmLcW_HQtW0HSfQFJO-cA2vnGQ998ulqnc1sS5Yo1f_48aJDiN0aoZwsIpERwChcGFOkW7ljV5apfG0buvVOu86sxpP095LlqvbV7iSDxzU9dZFiVos3ZRyA6gjG7abxfxWmUbzGKtsxls8u55Ua6C3KtNsBpM1IG6RgpLkD4UckPXbRBfdjFr-zGhYoXn_GuHmbJ8EgduR0d522HyH2dULYbAAM8_RIlz6i-41HDjyoDjw","audiences":["gangway"]},"status":{"user":{},"error":"[invalid bearer token, unknown]"}}
E0610 03:27:32.151879       1 proxy.go:215] unable to authenticate the request via TokenReview due to an error (127.0.0.1:47980): error authenticating using token review: [invalid bearer token, unknown]
I0610 03:27:32.151909       1 handlers.go:169] unauthenticated user request 127.0.0.1:47980

@JoshVanL - On non-Istio I get the same error. Any suggestions?

This is an important feature; we would really like it to work.

@widdix123 it is tricky to see where things are going wrong here when the tokens look okay.

Can you have a look at the secret for the kube-oidc-proxy-config and make sure that the items aren't double base64-encoded or some such? They are also case/space/newline sensitive.
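
For example, a quick sketch of checking and recreating them (namespace and names as in this thread; values are placeholders):

# Decode each item once; the result should be plain text, not another base64 layer
kubectl get secret kube-oidc-proxy-config -n kube-oidc-proxy \
  -o jsonpath='{.data.oidc\.issuer-url}' | base64 -d; echo

# Recreating the secret from literals/files (after deleting the old one)
# encodes each value exactly once
kubectl create secret generic kube-oidc-proxy-config -n kube-oidc-proxy \
  --from-literal=oidc.client-id=gangway \
  --from-literal=oidc.issuer-url=https://dex.v4.xxxxx.com/dex \
  --from-literal=oidc.username-claim=email \
  --from-file=oidc.ca-pem=letsencrypt-chain.pem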

@JoshVanL - Thanks, that was the issue; my configmap had an encoding problem. I am able to log in, run kubectl get pods, and much more. It works as expected.

However, is there any integration with the Kubernetes dashboard UI using an auth header token?

@widdix123 Glad you got it working :)

I'm guessing you are asking whether you are able to auth through your browser to an otherwise unsecured dashboard? This would be better suited to something like OAuth Proxy, which has quite a nice tutorial to follow.

kube-oidc-proxy is more for API calls and CLIs. You'll have a better time using OAuth Proxy for browser-based apps. Hope that helps!
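
For a rough idea, the relevant oauth2-proxy container args look something like the following (a sketch only; the client ID, URLs, and upstream are assumptions, not from this thread):

args:
- --provider=oidc
- --oidc-issuer-url=https://dex.v4.xxxxx.com/dex   # same Dex issuer as above
- --client-id=dashboard
- --client-secret=xxxxxx
- --email-domain=*
- --cookie-secret=xxxxxx
- --upstream=http://kubernetes-dashboard.kubernetes-dashboard.svc   # the protected app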

Seems to be solved.

/close

@JoshVanL: Closing this issue.
