This lab teaches how to integrate identity into a Kubernetes deployment. Slides for the lab: https://www.slideshare.net/MarcBoorshtein/k8s-identity-management. NOTE: this lab is NOT designed to be a self-learning lab; it is meant to be presented in a classroom setting. We have open sourced the lab as a thank you to the community that has helped us along the way while building out our projects and this training. We hope that by exploring this repo we can give you ideas for how to solve your own cluster's identity needs.
This lab will set up a single-node Kubernetes cluster with:
- Kubernetes Dashboard 2.0 beta
- OpenUnison for authentication and automated provisioning (https://github.com/OpenUnison/openunison-k8s-activedirectory)
- MariaDB as a database for OpenUnison
- Postfix as a "black hole" for SMTP (does not currently work as intended)
- Ingress NGINX controller
- A bare-metal load balancer
The lab will take you through integrating OpenUnison for SSO, enabling the Kubernetes audit log, debugging RBAC policies and setting up pod security policies.
To run this lab you will need:
- Active Directory domain controller configured with LDAPS
- A read only service account for the domain controller
- A user with givenName, sn, samAccountName, and mail attributes
- The domain controller's certificate in PEM format
- A VM with 2 processors and 4 GB of RAM running Ubuntu Server 18.04 or later (NOT the live install)
- A system with Ansible installed to run the deployment playbook
- An SSH key to copy to the VM
The playbooks were tested with http://cdimage.ubuntu.com/releases/18.04.3/release/ubuntu-18.04.3-server-amd64.iso as well as with VMs on DigitalOcean and AWS using their standard Ubuntu images.
- Copy your SSH key to the server:
```
ssh-copy-id user@x.x.x.x
```
- Edit `inventory.ini`: replace the IP address with the IP of your server(s) and update the variables with the correct information
- Copy your PEM file to `ldaps.pem`
- Run the playbook:
```
ansible-playbook -i ./inventory.ini --user=user --extra-vars='ansible_sudo_pass=password' deploy_all.yaml
```
- Grab some coffee; this will take 10-15 minutes to run.
NOTE: At the moment Chrome doesn't seem to like the self-signed certificates in conjunction with nip.io addresses. Use Firefox.
- Open https://ou.apps.IP.nip.io/ (replace `IP` with the IP of your server)
- Login with the username / password `k8s-lab` / `$tart123`
- Logout
- SSH to your server as `root` using your SSH key
- Make yourself an administrator:
```
/usr/bin/mysql -u root -h $(/usr/bin/kubectl get svc -n mariadb -o json | /snap/bin/jq -r .items[0].spec.clusterIP) --password=start123 -e "insert into userGroups (userId,groupId) values (2,1);" unison
```
- Make yourself a cluster administrator:
```
/usr/bin/mysql -u root -h $(/usr/bin/kubectl get svc -n mariadb -o json | /snap/bin/jq -r .items[0].spec.clusterIP) --password=start123 -e "insert into userGroups (userId,groupId) values (2,2);" unison
```
- Log back in
- Click on Kubernetes Dashboard
- SSH to your server
- Get the API server parameter flags:
```
kubectl describe configmap api-server-config -n openunison
```
- Export the CA certificate:
```
kubectl get secret ou-tls-certificate -n openunison -o json | jq -r '.data."tls.crt"' | base64 -d > /etc/kubernetes/pki/ou-ca.pem
```
- Update `/etc/kubernetes/manifests/kube-apiserver.yaml` with the output of the `api-server-config` ConfigMap from the first step (a sketch of what these flags typically look like follows this list)
- Clear your k8s config:
```
rm /root/.kube/config
```
- Run `kubectl get pods --all-namespaces` (without a kubeconfig this will fail)
- Load your token from the OpenUnison portal, then run:
```
kubectl get pods --all-namespaces
```
- Logout of OpenUnison
- Watch the cluster's pods:
```
watch kubectl get pods --all-namespaces
```
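For reference, the output of the `api-server-config` ConfigMap is a set of standard Kubernetes OIDC flags that get added to the `command` section of `kube-apiserver.yaml`. A minimal sketch is below; the issuer URL pattern and CA path match this lab, but the client id and claim names are assumptions, so always use the values from the ConfigMap:

```yaml
# Illustrative only: copy the real flags from the api-server-config ConfigMap.
# The client id and claim names below are assumptions.
- --oidc-issuer-url=https://ou.apps.IP.nip.io/auth/idp/k8sIdp
- --oidc-client-id=kubernetes
- --oidc-username-claim=sub
- --oidc-groups-claim=groups
- --oidc-ca-file=/etc/kubernetes/pki/ou-ca.pem
```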
- Login to OpenUnison with the user `makens` and the password `$tart123`
- Setup kubectl using your token (an illustrative kubeconfig sketch follows this list)
- Try to create a namespace; it will fail:
```
kubectl create ns mynewns
```
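OpenUnison's portal generates the exact commands for loading your token, so there is nothing to hand-edit. Purely as an illustration, assuming the portal issues an OIDC id_token and refresh token, the resulting user entry in your kubeconfig might look roughly like this (every value below is a placeholder or an assumption):

```yaml
# Hypothetical kubeconfig "users" entry; the real one is produced by the
# commands on the OpenUnison token screen. Values below are placeholders.
users:
  - name: makens
    user:
      auth-provider:
        name: oidc
        config:
          idp-issuer-url: https://ou.apps.IP.nip.io/auth/idp/k8sIdp
          idp-certificate-authority: /etc/kubernetes/pki/ou-ca.pem
          client-id: kubernetes
          id-token: <id_token from the portal>
          refresh-token: <refresh_token from the portal>
```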
- Enable audit logging:
```
mkdir /var/log/k8s
mkdir /etc/kubernetes/audit
cp k8s-audit-policy.yaml /etc/kubernetes/audit
```
- Edit `/etc/kubernetes/manifests/kube-apiserver.yaml`:
  - Add to the `command` section:
```
- --audit-log-path=/var/log/k8s/audit.log
- --audit-log-maxage=1
- --audit-log-maxbackup=10
- --audit-log-maxsize=10
- --audit-policy-file=/etc/kubernetes/audit/k8s-audit-policy.yaml
```
  - Add to the `volumeMounts` section:
```
- mountPath: /var/log/k8s
  name: var-log-k8s
  readOnly: false
- mountPath: /etc/kubernetes/audit
  name: etc-kubernetes-audit
  readOnly: true
```
  - Add to the `volumes` section:
```
- hostPath:
    path: /var/log/k8s
    type: DirectoryOrCreate
  name: var-log-k8s
- hostPath:
    path: /etc/kubernetes/audit
    type: DirectoryOrCreate
  name: etc-kubernetes-audit
```
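The lab ships `k8s-audit-policy.yaml`, so you do not need to write a policy yourself. If you want to see the general shape of an audit policy before wiring it into the API server, here is a minimal illustrative example (not the lab's file):

```yaml
# Minimal illustrative audit policy; the lab's k8s-audit-policy.yaml is the
# file actually used, and its rules may differ.
apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the RequestReceived stage to reduce log volume
omitStages:
  - "RequestReceived"
rules:
  # Record request and response bodies for namespace operations
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["namespaces"]
  # Log everything else at the metadata level
  - level: Metadata
```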
- Once the API server is running again, login as `makens` again and try creating a namespace again; it will still fail:
```
kubectl create ns mynewns
```
- Look for the audit log messages:
```
grep makens /var/log/k8s/audit.log
```
- Generate RBAC rules with `audit2rbac`, replacing `IP` with the IP of your cluster (a rough sketch of the generated output follows this list):
```
./audit2rbac --filename=/var/log/k8s/audit.log --user=https://ou.apps.IP.nip.io/auth/idp/k8sIdp#makens > newrbac.yaml
```
- Set your context to admin:
```
export KUBECONFIG=/root/.kube/config-admin
```
- Import the RBAC:
```
kubectl create -f ./newrbac.yaml
```
- Unset your kubeconfig to go back to your default:
```
export KUBECONFIG=
```
- Try again: `kubectl create ns mynewns` - SUCCESS!
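For context, `audit2rbac` turns the denied requests it finds in the audit log into RBAC objects. Since `makens` only attempted to create a namespace, `newrbac.yaml` should contain roughly the objects sketched below; the exact names and labels audit2rbac generates will differ:

```yaml
# Rough approximation of audit2rbac's output; object names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: audit2rbac:makens
rules:
  # Namespaces are cluster scoped, so creating one needs a ClusterRole
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: audit2rbac:makens
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: audit2rbac:makens
subjects:
  # The OIDC user as the API server sees it: issuer URL + "#" + username claim
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: https://ou.apps.IP.nip.io/auth/idp/k8sIdp#makens
```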
- Create the policies (an illustrative sketch of such a file appears at the end of this section):
```
kubectl create -f ./podsecuritypolicies.yaml
```
- Edit `/etc/kubernetes/manifests/kube-apiserver.yaml`, changing `--enable-admission-plugins=NodeRestriction` to `--enable-admission-plugins=PodSecurityPolicy,NodeRestriction`
- Save
- Delete all your pods:
```
kubectl delete pods --all-namespaces --all
```
- Once done, check if OpenUnison is running and what policy it's running under:
```
kubectl describe pods -l application=openunison-orchestra -n openunison
```
- Check if the ingress pod is running:
```
kubectl get pods -n ingress-nginx
```
- Check if MariaDB is running:
```
kubectl get pods -n mariadb
```
- Look at the events for both the `mariadb` and `ingress-nginx` namespaces:
```
kubectl get events -n mariadb
kubectl get events -n ingress-nginx
```
- Why isn't it running? Fix ingress-nginx by binding its service account to the privileged policy:
```
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
# The ingress controller's service account
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
EOF
```
- Update the ingress-nginx `Deployment` to force a redeploy:
```
kubectl edit deployment nginx-ingress-controller -n ingress-nginx
```
- Fix MariaDB the same way:
```
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mariadb
  namespace: mariadb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
# MariaDB runs under the namespace's default service account
- kind: ServiceAccount
  name: default
  namespace: mariadb
EOF
```
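For reference, `podsecuritypolicies.yaml` supplies the policies and the `privileged-psp` ClusterRole that the two RoleBindings above grant. The repo's file is the source of truth; a hedged sketch of what such a policy/role pair generally looks like (policy name and settings are assumptions) is:

```yaml
# Illustrative only; the lab's podsecuritypolicies.yaml may differ.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostNetwork: true
  hostIPC: true
  hostPID: true
  hostPorts:
    - min: 0
      max: 65535
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
# ClusterRole that allows "use" of the policy; the RoleBindings above bind it
# to specific service accounts in the ingress-nginx and mariadb namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["privileged"]
    verbs: ["use"]
```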