Kong ingress controller bypassing the OIDC plugin in Kubernetes
Hi,
Summary
I have deployed the kong-oidc container v2.3.3-1 (https://github.com/revomatico/docker-kong-oidc/releases/tag/2.3.3-1) on Kubernetes in AWS, and I need to integrate Keycloak with this Kong instance. After completing all the configuration in Keycloak and creating the Kong plugin entity via YAML, requests to the microservice bypass the oidc plugin: I can access the service directly through the ingress.
Steps to reproduce
- Created the OIDC Plugin Entity using YAML:
```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc
  namespace: kong
  labels:
    global: "true"
config:
  client_id: kong_api_access
  client_secret: 093c6dd1-XXXX-XXXX-XXXX-XXXXXXXXXXXX
  scope: openid
  realm: kong
  discovery: http://keycloak.abc.com/auth/realms/kong/.well-known/openid-configuration
  introspection_endpoint: http://keycloak.abc.com/auth/realms/kong/protocol/openid-connect/token/introspect
plugin: oidc
```
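A quick way to confirm the object was admitted into the cluster (assuming kubectl access and the Kong CRDs installed):

```sh
# Should list the KongPlugin resource created above
kubectl get kongplugin oidc -n kong
```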
- Created the test microservice and exposed it with the following Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    plugins.konghq.com: oidc
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
```
- Accessed the service at https://test.abc.com/echo.
The request goes straight through to the service instead of redirecting to the Keycloak login page.
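For reference, with the plugin active one would expect an unauthenticated request to be redirected to Keycloak rather than answered by the service. A quick check (hostname taken from the issue, flags illustrative):

```sh
# With oidc active: expect 302 and a redirect_url pointing at Keycloak.
# Here the echo response comes back with 200 instead.
curl -sk -o /dev/null -w '%{http_code} %{redirect_url}\n' https://test.abc.com/echo
```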
Additional Information
- I can see that the oidc plugin is loaded in Kong:

```sh
curl -s --insecure https://127.0.0.1:8444/plugins/enabled
{"enabled_plugins":["grpc-web","correlation-id","pre-function","cors","rate-limiting","loggly","hmac-auth","zipkin","request-size-limiting","azure-functions","request-transformer","oauth2","response-transformer","ip-restriction","statsd","jwt","proxy-cache","basic-auth","key-auth","http-log","oidc","session","datadog","tcp-log","prometheus","post-function","ldap-auth","acl","grpc-gateway","file-log","syslog","udp-log","response-ratelimiting","aws-lambda","bot-detection","acme","request-termination"]}
```
I am assuming you have:
- deployed the Kong ingress controller? It is enabled by default if you are using the Helm chart to deploy Kong.
- set the redirect_uri plugin param as well? (See the sketch after this list.)
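A minimal sketch of where that parameter would go, extending the KongPlugin manifest above; the URI value is a placeholder and must match a redirect URI registered for the client in Keycloak:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc
  namespace: kong
config:
  client_id: kong_api_access
  # Placeholder value; must match a redirect URI registered for the client in Keycloak
  redirect_uri: https://test.abc.com/echo
plugin: oidc
```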
We do not use the ingress controller yet, as we have a lot of stateless configuration to migrate to ingress definitions and CRDs. We will do this soon and then be able to test it thoroughly, but as of right now we do not have the resources to test your scenario.
- What do the Kong logs say with log_level=debug set (or via the KONG_LOG_LEVEL=debug env var; see the sketch below)?
- What do the ingress controller logs say?
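For the debug logging, a minimal sketch of setting that env var on the Kong container, assuming a standard Deployment (deployment and container names here are illustrative):

```yaml
# Fragment of a hypothetical Kong Deployment spec
spec:
  template:
    spec:
      containers:
      - name: proxy  # assumed container name
        image: revomatico/docker-kong-oidc:2.3.3-1
        env:
        - name: KONG_LOG_LEVEL
          value: debug
```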
Thanks @cristichiru for checking this out. The problem is resolved now.
I was using an incorrect annotation in ingress.yaml: instead of `plugins.konghq.com: oidc`, the correct annotation is `konghq.com/plugins: oidc`. The correct ingress YAML is:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    konghq.com/plugins: oidc
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
```
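After applying the corrected Ingress, the fix can be verified by inspecting the response headers (hostname from the issue, flags illustrative):

```sh
# Expect a 302 with a Location header pointing at the Keycloak
# authorization endpoint instead of the echo service's payload
curl -sk -D - -o /dev/null https://test.abc.com/echo
```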