A helm chart that deploys the kanidm project into any kubernetes cluster.
Read the kanidm docs and install the server before you start.

The helm chart provides (eventually):
- persistent storage
- secret management
- container security
- auto-scaling (hpa)

For a deployment you will also need:
- microK8s (or a similar local k8s) installed for a local deployment
- an account with a cloud provider (azure, gcp, aws, digital ocean) for a cloud deployment
clone repo:
$ git clone git@github.com:ronamosa/kanidm-k8s.git
$ cd kanidm-k8s/
Quick version:
$ sudo snap install microk8s --classic
$ microk8s.status
$ microk8s kubectl get all
Read the microK8s docs for more details.
enable helm3
$ microk8s enable helm3
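you can sanity-check the helm3 addon with a standard helm subcommand, e.g.
$ microk8s.helm3 version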
Cloud provider deployment: coming soon.
The kanidm server needs the following to boot up:
- a server.toml config file
- a persistent volume for the db file
- (optional) tls cert, key

The server.toml file is provided by way of a configMap, under config.yaml:
...
data:
  server.toml: |
    bindaddress = "0.0.0.0:8443"
    db_path = "/db/kanidm.db"
    tls_ca = "/ssl/ca.pem"
    tls_cert = "/ssl/cert.pem"
    tls_key = "/ssl/key.pem"
    log_level = "verbose"
...
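The template wrapper around that data block is elided above; a minimal sketch of what the full ConfigMap could look like (the wrapper itself is an assumption, although the release-scoped name matches what deployment.yaml references further down):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  server.toml: |
    bindaddress = "0.0.0.0:8443"
    # ... remaining server.toml settings as shown above ...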
Local: for a dev/local setup you can use self-signed certs.

A few things to keep in mind with this local setup and self-signed certs:
- the cert uses a DNS name in the SAN, rather than an IP
- you need to add the kanidm pod's clusterIP to /etc/hosts to make this local DNS SAN work
- you only get the kanidm pod's clusterIP after you deploy the chart
Create your certs by running ./certs/insecure_generate_tls.sh.
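To see roughly what the script has to produce (a CA plus a server cert whose SAN carries the DNS name), here is a sketch using plain openssl; it is not the script's actual contents and the filenames are assumptions:

# sketch only: a throwaway CA plus a server cert carrying the DNS SAN
# (filenames here are assumptions; the real script lives at ./certs/insecure_generate_tls.sh)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=insecure local CA"
openssl req -newkey rsa:4096 -nodes \
  -keyout key.pem -out cert.csr -subj "/CN=k8s.kanidm.local"
printf "subjectAltName=DNS:k8s.kanidm.local\n" > san.ext
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out cert.pem -days 365 -extfile san.ext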
The default DNS alt_name will be k8s.kanidm.local, so once you have the kanidm pod's clusterIP, this will become the entry in your /etc/hosts file, e.g.

10.152.183.232 k8s.kanidm.local
The ca.pem, cert.pem and key.pem files will be deployed as kubernetes secret objects by base64-encoding each file and entering the output into the helm/templates/secrets.yaml file:
$ base64 -w0 certs/ca.pem
$ base64 -w0 certs/cert.pem
$ base64 -w0 certs/key.pem
Copy/paste the encoded strings into the appropriate data blocks in secrets.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-certs
type: Opaque
data:
  ca.pem: <base64_hash_goes_here>
  cert.pem: <base64_hash_goes_here>
  key.pem: <base64_hash_goes_here>
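If you would rather not base64-encode by hand, a kubectl dry run can generate an equivalent Secret manifest whose data: values you can copy into secrets.yaml (the secret name here is only illustrative):

$ microk8s.kubectl create secret generic kanidm-certs \
    --from-file=ca.pem=certs/ca.pem \
    --from-file=cert.pem=certs/cert.pem \
    --from-file=key.pem=certs/key.pem \
    --dry-run=client -o yaml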
The volumes for the database and the tls certs will be set up by deployment.yaml as follows:
# mount points
volumeMounts:
  - name: "config"
    mountPath: "/data"
  - name: "data"
    mountPath: "/db"
  - name: "certs"
    mountPath: "/ssl"

# volumes
volumes:
  - name: config
    configMap:
      name: {{ .Release.Name }}-config
  - name: certs
    secret:
      secretName: {{ .Release.Name }}-certs
  - name: data
    {{- if .Values.persistence.enabled }}
    persistentVolumeClaim:
      claimName: {{ if .Values.persistence.existingClaim }}{{ .Values.persistence.existingClaim }}{{- else }}{{ .Values.persistence.staticClaimName }}{{- end }}
    {{- else }}
    emptyDir: {}
    {{- end }}
Note: the configMap will mount to /data, cert data comes from k8s secrets, and /db data is persisted using a PersistentVolume and PersistentVolumeClaim.
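The persistence switches referenced in that template come from values.yaml; a minimal sketch of the corresponding keys could be (the values shown are assumptions, point staticClaimName at whatever claim you create below):

persistence:
  enabled: true
  existingClaim: ""
  staticClaimName: pv-hostpath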
You want your kanidm database mounted at /db to persist when the pod dies, so create a persistent volume for your microK8s setup as follows:
microk8s.kubectl apply -f ./persistent-storage/pv-hostpath.yaml
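The manifest itself is not reproduced here; a hostPath PV and matching claim consistent with the output below could look roughly like this (the hostPath path is an assumption, point it at any writable directory on your node):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/snap/microk8s/common/kanidm-db   # assumption: any writable path on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-hostpath
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi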
Check it's all set up and ready to go:
microk8s.kubectl get pv,pvc
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pv-hostpath   1Gi        RWO            Retain           Released   default/pv-hostpath   manual                  2d
install the chart
$ microk8s.helm3 install --debug kanidm helm/kanidm
get clusterIP address
$ microk8s.kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kanidm       ClusterIP   10.152.183.232   <none>        8443/TCP,3389/TCP   32m
kubernetes   ClusterIP   10.152.183.1     <none>        443/TCP             23h
update /etc/hosts, e.g.
10.152.183.232 k8s.kanidm.local
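one way to append that entry (adjust the IP to whatever your service shows):
$ echo "10.152.183.232 k8s.kanidm.local" | sudo tee -a /etc/hosts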
jump into the kanidm pod (your pod name will differ), e.g.
$ microk8s.kubectl exec -ti kanidm-7b869bdcbc-ljcdr bash
run the recover_account command
$ /sbin/kanidmd recover_account -c /data/server.toml -n admin
while still in the kanidm pod, run the following
$ /sbin/kanidmd domain_name_change -c /data/server.toml -n idm.example.com
test the kanidm setup (note: you need the kanidm client binary installed first, see the kanidm docs for instructions)
$ kanidm self whoami -C ca.pem -H https://k8s.kanidm.local:8443 --name anonymous
successful output looks like
name: anonymous
spn: anonymous@example.com
display: anonymous
uuid: 00000000-0000-0000-0000-ffffffffffff
groups: [Group { name: "anonymous", uuid: "00000000-0000-0000-0000-ffffffffffff" }]
claims: []
# upgrade helm chart
$ microk8s.helm3 upgrade --install --debug kanidm helm/kanidm
# delete helm chart
$ microk8s.helm3 delete kanidm
# check PersistentVolumes, PersistentVolumeClaims
$ microk8s.kubectl get pv,pvc
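# remove the hostPath PV/PVC created earlier (assuming you created it from the manifest above)
$ microk8s.kubectl delete -f ./persistent-storage/pv-hostpath.yaml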