OIDC group membership is not passed to k8s cluster
justinas-b opened this issue · 10 comments
Hey,
In my OIDC token I have group membership defined for a user. Decoded token looks like:
{
  "iss": "https://dex.example.com",
  "sub": "**********",
  "aud": "gangway",
  "exp": *******,
  "iat": ********,
  "at_hash": "*******",
  "email": "justinas@example.com",
  "email_verified": true,
  "groups": [
    "test-group@example.com"
  ],
  "name": "Justinas B"
}
However, if I create a ClusterRoleBinding as below, after a successful login I am not assigned the view ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: Group
  name: test-group@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
I do not have any --oidc-groups-prefix= defined on the kube-oidc-proxy deployment. Am I missing something here? If I create a ClusterRoleBinding directly for a user, all works fine.
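As an aside for anyone debugging the same thing: the binding itself can be tested independently of the proxy by impersonating the group against the API server. A minimal sketch, assuming your own kubeconfig user is allowed to impersonate (names are taken from the example above):

kubectl auth can-i list pods \
  --as=justinas@example.com \
  --as-group=test-group@example.com
# "yes" means the RBAC side is fine and the problem is the proxy not
# forwarding the group; "no" points at the binding itself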
Hi @justinas-b, looks like you need to set the flag --oidc-groups-claim=groups to pick up that claim.
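A minimal sketch of where that flag lives, with the other flags elided (issuer and client-id reuse values from the token above):

args:
- "--oidc-issuer-url=https://dex.example.com"
- "--oidc-client-id=gangway"
- "--oidc-groups-claim=groups"  # token claim that carries group membership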
I actually have the same issue, and I do have that flag set. Still the same problem.
Can you run it at a higher log level (--v=10)? Hopefully(!) this should give us some output of the requests being made, and the group impersonation headers.
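If the proxy runs as a Deployment, one way to do that is to edit the args in place and tail the logs afterwards. A sketch, assuming the deployment is named kube-oidc-proxy in a namespace of the same name:

kubectl -n kube-oidc-proxy edit deployment kube-oidc-proxy
# add "--v=10" to the container args; the pod restarts automatically
kubectl -n kube-oidc-proxy logs -f deployment/kube-oidc-proxy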
Ok, here are the logs after logging into gangway, downloading a kubeconfig, and trying to get pods with that kubeconfig:
Thanks @brokencode64, can you share the OIDC flags on kube-oidc-proxy, and can you confirm that there is a groups claim in the token used in the kubeconfig?
Yep, here is the config used. Unlike @justinas-b, I am adding the prefix, but I have also tried without it. Each of the variables is from a secret, and I've double-checked those. The claim flag should resolve to "--oidc-groups-claim=groups".
name: kube-oidc-proxy
command: ["kube-oidc-proxy"]
args:
- "--secure-port=443"
- "--tls-cert-file=/etc/oidc/tls/crt.pem"
- "--tls-private-key-file=/etc/oidc/tls/key.pem"
- "--oidc-client-id=$(OIDC_CLIENT_ID)"
- "--oidc-issuer-url=$(OIDC_ISSUER_URL)"
- "--oidc-groups-claim=$(OIDC_GROUPS_CLAIM)"
- "--oidc-groups-prefix=oidc"
- "--oidc-username-claim=$(OIDC_USERNAME_CLAIM)"
- "--oidc-ca-file=/etc/oidc/oidc-ca.pem"
- "--v=10"
The kubeconfig looks generally like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [removed]
    server: https://kube-oidc-proxy.example.com
  name: lab-aws
contexts:
- context:
    cluster: lab-aws
    user: test-user@example-lab
  name: lab-aws
current-context: lab-aws
kind: Config
preferences: {}
users:
- name: test-user@example-lab
  user:
    auth-provider:
      config:
        client-id: gangway
        client-secret: [removed]
        id-token: [removed]
        idp-issuer-url: https://dex.example.com
        refresh-token: [removed]
      name: oidc
Decoded token looks like this (confirmed it has the appropriate groups):
{
  "iss": "https://dex.example.com",
  "sub": "[removed]",
  "aud": "gangway",
  "exp": 1606906394,
  "iat": 1606819994,
  "at_hash": "[removed]",
  "email": "test-user@example.com",
  "email_verified": true,
  "groups": [
    "group1",
    "test-group"
  ],
  "name": "test-user"
}
@brokencode64 You will need to decode the token; there are various online and CLI tools out there.
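For example, with common CLI tools, assuming the raw id-token is in $TOKEN (jq is only used for pretty-printing):

# the payload is the second dot-separated segment; JWT segments are
# base64url-encoded and may lack padding, hence the fixups below
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | jq .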
Done, edited my original comment with the token. Looks good as far as I can tell.
Not too sure what the issue is in this case. Can you replace "--oidc-groups-claim=$(OIDC_GROUPS_CLAIM)" with "--oidc-groups-claim=groups" to triple-check it's passing in the right thing?
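A quick way to see exactly which args the running container ended up with (namespace and label are assumptions about your install):

kubectl -n kube-oidc-proxy get pod -l app=kube-oidc-proxy \
  -o jsonpath='{.items[0].spec.containers[0].args}'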
Success!
Somehow that parameter was causing issues. I hard-coded it and that part seemed to work. It seems I still had one more problem after that, though: going back through the logs, it looks like my prefix naming was also off.
It looks like kube-oidc-proxy was getting this:
"Impersonate-Group: oidck8examplegroup"
when I thought it should be getting this:
oidc:examplegroup
I changed the ClusterRoleBinding to match and now it works.
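For reference on the prefix behaviour: the value of --oidc-groups-prefix is prepended to each group claim verbatim, with no separator added, so --oidc-groups-prefix=oidc produces names like oidcexamplegroup. To get oidc:examplegroup, the colon has to be part of the prefix, and the binding then references the prefixed name:

# set on the proxy:
#   --oidc-groups-prefix=oidc:
subjects:
- kind: Group
  name: oidc:examplegroup
  apiGroup: rbac.authorization.k8s.io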