Links:
- Certification - Certified Kubernetes Security Specialist (CKS)
- Open Source Curriculum for CNCF Certification Courses
- Trivy - a comprehensive and versatile security scanner
- Sysdig Documentation Hub
- The Falco Project - Cloud Native Runtime Security
- AppArmor Documentation
Links:
- Kubernetes Documentation:
- https://kubernetes.io/docs/ and its subpages
- https://kubernetes.io/blog/ and its subpages
- Tools:
- Trivy documentation https://aquasecurity.github.io/trivy/
- Falco documentation https://falco.org/docs/
- etcd documentation https://etcd.io/docs/
- AppArmor
kubectl autocompletion and alias:
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k # autocomplete k
Vim settings:
set nu " show line numbers
set tabstop=2 shiftwidth=2 expandtab " use 2 spaces instead of tabs
set ai " autoindent: keep the same indentation on new lines
Commands:
kind create cluster --config kind-multi-node.yaml
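A minimal kind-multi-node.yaml could look like this (the node counts are an assumption, not taken from the original file):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# one control plane node and two workers
- role: control-plane
- role: worker
- role: worker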
Default deny all ingress traffic:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
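An empty podSelector with both policy types denies all ingress and egress traffic in the namespace:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress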
Allow ingress traffic to port TCP/6379 from a CIDR, a namespace and pods with a matching label; allow egress traffic to a CIDR on port TCP/5978:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Jobs for control plane and workers:
k apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml
k apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml
k get pods
k logs <JOB_FOR_MASTER>
k logs <JOB_FOR_NODE>
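Logs can also be read via the Job names directly (names assumed to match the manifests above):
k logs job/kube-bench-master
k logs job/kube-bench-node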
Links:
- 1.2.1 Ensure that the --anonymous-auth argument is set to false
- Set Kubelet Parameters Via A Configuration File
- Configuring each kubelet in your cluster using kubeadm
Define authentication and authorization in /var/lib/kubelet/config.yaml:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 10s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
healthzBindAddress: 127.0.0.1
healthzPort: 10248
kubeletCgroups: /systemd/system.slice
kubeReserved:
  cpu: 200m
  memory: 250Mi
nodeStatusUpdateFrequency: 10s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
Command:
sudo systemctl restart kubelet
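A quick sanity check that anonymous requests are now rejected (assuming the default kubelet port 10250):
curl -sk https://localhost:10250/pods
# expect "401 Unauthorized" instead of a pod list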
Change in /etc/kubernetes/manifests/kube-apiserver.yaml:
--anonymous-auth=false
Change in /etc/kubernetes/manifests/kube-apiserver.yaml:
--profiling=false
Change in /etc/kubernetes/manifests/kube-apiserver.yaml:
--authorization-mode=RBAC
Change in /etc/systemd/system/multi-user.target.wants/etcd.service:
--client-cert-auth=true
Files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
Changes:
...
spec:
  containers:
  - command:
    ...
    - --profiling=false
    ...
Commands:
sudo systemctl restart kubelet
Changes in /var/lib/kubelet/config.yaml:
authorization:
  mode: Webhook
Commands:
sudo systemctl restart kubelet
Links:
- Certificates and Certificate Signing Requests
- Ingress with TLS
- netshoot: a Docker + Kubernetes network trouble-shooting swiss-army container
Secret with certificate:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: seba-tls-certs
  namespace: seba
data:
  tls.crt: |
    <base64-encoded cert data from file seba.crt>
  tls.key: |
    <base64-encoded key data from file seba.key>
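The same Secret can be created without manual base64 encoding:
kubectl -n seba create secret tls seba-tls-certs --cert=seba.crt --key=seba.key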
Ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: seba-tls-ingress
  namespace: seba
spec:
  tls:
  - hosts:
    - seba.svc
    secretName: seba-tls-certs
  rules:
  - host: seba.svc
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: seba-svc
            port:
              number: 80
Commands:
openssl req -nodes -new -x509 -keyout seba.key -out seba.crt -subj "/CN=seba.svc"
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot
> curl seba-svc.seba.svc.cluster.local
> curl -H "Host: seba.svc" http://<ingress-controller-ip>
Commands:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
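The check above assumes the kubectl binary was downloaded next to the checksum file, e.g.:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"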
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
  name: my-serviceaccount
  namespace: my-namespace
Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
Role binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: my-serviceaccount-pod-and-pod-logs-reader
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-and-pod-logs-reader
subjects:
- kind: ServiceAccount
  name: my-serviceaccount
  namespace: my-namespace
Create a new service account, role and role binding:
kubectl create serviceaccount my-serviceaccount -n my-namespace --dry-run=client -o yaml
kubectl create role pod-and-pod-logs-reader \
  --verb=get --verb=list \
  --resource=pods --resource=pods/log \
  --namespace=my-namespace \
  --dry-run=client -o yaml
kubectl create rolebinding my-serviceaccount-pod-and-pod-logs-reader \
  --role=pod-and-pod-logs-reader \
  --serviceaccount=my-namespace:my-serviceaccount \
  --namespace=my-namespace \
  --dry-run=client -o yaml
Modify existing role:
kubectl -n my-namespace get sa
kubectl -n my-namespace get rolebindings.rbac.authorization.k8s.io -o yaml
kubectl -n my-namespace get role pod-and-pod-logs-reader -o yaml
kubectl -n my-namespace edit role pod-and-pod-logs-reader
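Verify that the binding works by impersonating the service account:
kubectl auth can-i get pods \
  --as=system:serviceaccount:my-namespace:my-serviceaccount \
  -n my-namespace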
Links:
- Restrict a Container's Access to Resources with AppArmor
- AppArmor Documentation
- AppArmor and Kubernetes
- Manage AppArmor profiles in Kubernetes with kube-apparmor-manager
AppArmor profile to deny write:
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
Load profile into all nodes:
# This example assumes that node names match host names, and are reachable via SSH.
NODES=($(kubectl get nodes -o name))
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
EOF'
done
Pod with deny-write profile:
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: k8s-apparmor-example-deny-write
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
Older approach using an annotation (deprecated in favor of securityContext.appArmorProfile since Kubernetes 1.30):
apiVersion: v1
kind: Pod
metadata:
  name: hello-restricted
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
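To confirm the profile is enforced inside the container (expected output shown as comments; exact wording may vary):
kubectl exec hello-apparmor -- cat /proc/1/attr/current
# k8s-apparmor-example-deny-write (enforce)
kubectl exec hello-apparmor -- touch /tmp/test
# touch: /tmp/test: Permission denied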
Encode password:
echo -n '39528$vdg7Jb' | base64
Define secret with base64 encoded password:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
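Alternatively, stringData accepts the same values without base64 encoding and Kubernetes encodes them on write:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
stringData:
  username: my-app
  password: 39528$vdg7Jb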
Container environment variable with data from secret:
apiVersion: v1
kind: Pod
metadata:
  name: env-single-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
Pod that accesses the secret through a volume:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
Commands:
kubectl create secret generic db-user-pass \
  --from-literal=username=admin \
  --from-literal=password='S!B\*d$zDsb='
kubectl create secret generic db-user-pass \
  --from-file=./username.txt \
  --from-file=./password.txt
kubectl get secrets
kubectl get secret db-user-pass -o jsonpath='{.data}'
echo 'UyFCXCpkJHpEc2I9' | base64 --decode
kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
kubectl edit secrets db-user-pass
Runtime class:
# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: myclass
# The name of the corresponding CRI configuration
handler: myconfiguration
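For gVisor specifically, the handler is typically runsc (this assumes the container runtime on the nodes is configured with a runsc handler):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc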
Usage of runtime class in pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # ...
Check if gVisor is used:
# on gVisor the emulated kernel log mentions gVisor (e.g. "Starting gVisor...")
kubectl exec -n NAMESPACE_NAME POD_NAME -- dmesg
Links:
- Dockerfile Best Practices
- General best practices for writing Dockerfiles
- Best practices for Dockerfile instructions
- Security best practices
- Docker Security Cheat Sheet
- Top 20 Dockerfile best practices
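A minimal Dockerfile sketch applying a few of the linked practices (pinned base image, non-root user; the image tag and file names are illustrative):
FROM python:3.12-alpine
# create an unprivileged user instead of running as root
RUN adduser -D -u 10001 appuser
WORKDIR /app
COPY --chown=appuser:appuser app.py .
# refer to the user by UID so runAsNonRoot checks can verify it
USER 10001
ENTRYPOINT ["python", "app.py"]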
Commands:
brew install trivy
# scan a container image for vulnerabilities
trivy image python:3.4-alpine
# report only HIGH and CRITICAL findings
trivy image --severity HIGH,CRITICAL python:3.4-alpine
# scan a local filesystem for vulnerabilities, secrets and misconfigurations
trivy fs --scanners vuln,secret,misconfig myproject/
# summary report for the whole cluster
trivy k8s --report summary cluster
Links:
- Admission Controllers Reference
- Trivy Operator
- Using Kubernetes Admission Controllers
- Kubernetes Security Tools: OPA Gatekeeper & Trivy
- Adding Trivy Scanner as custom Admission Controller
- Certified Kubernetes Security Specialist (CKS) Preparation Part 7 — Supply Chain Security
- Trivy Kubernetes Admission webhook
Image policy webhook:
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  path: /etc/kubernetes/admission-control/imagepolicyconfig.yaml
Image policy config:
imagePolicy:
  kubeConfigFile: /etc/kubernetes/admission-control/trivy-scanner.kubeconfig
  allowTTL: 50
  denyTTL: 50
  retryBackoff: 500
  defaultAllow: true # set to false to deny images when the image scanning service is not reachable
kubeconfig:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://trivy-scanner<my-domain>/scan
  name: okd
users:
- name: admin
  user: {}
preferences: {}
contexts:
- context:
    cluster: okd
    user: admin
  name: admin
current-context: admin
kube-apiserver static pod:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.124.20:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.124.20
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/admission-control/image-policy-webhook-conf.yaml
    [...]
    volumeMounts:
    [...]
    - mountPath: /etc/kubernetes/admission-control
      name: etc-kubernetes-admission-control
      readOnly: true
    [...]
  volumes:
  [...]
  - hostPath:
      path: /etc/kubernetes/admission-control
      type: DirectoryOrCreate
    name: etc-kubernetes-admission-control
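The scanning backend answers the kube-apiserver with an ImageReview object; a denying response looks roughly like this (the reason text is illustrative):
apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
status:
  allowed: false
  reason: "image contains HIGH/CRITICAL vulnerabilities"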
Links:
- The Falco Project - Cloud Native Runtime Security
- Falco
- Kubernetes Security Tools: Falco
- Default and local rules files
- Default rules
Custom rule:
- rule: Detect privilege escalation in /tmp
  desc: Detect privilege escalation of binaries executed in /tmp
  condition: >
    evt.type = setresuid and evt.dir=> and
    proc.exepath startswith /tmp/
  output: "The binary %proc.name has tried to escalate privileges: %evt.args"
  priority: debug
Commands:
# load multiple custom rules files (later files can override rules from earlier ones)
falco -r /path/to/my/rules1.yaml -r /path/to/my/rules2.yaml
# run with the given rules for 45 seconds, then exit
falco -M 45 -r /path/to/my/rules.yaml
Links:
- Best practices for operating containers - Immutability
- Configure a Security Context for a Pod or Container
- Improve the security of pods running on Kubernetes
Run as a non-root user:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
Read-only root file system:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      readOnlyRootFilesystem: true
Run with allowPrivilegeEscalation=false (a container-level setting; it is not valid in the pod-level securityContext):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false
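These settings are commonly combined and extended by dropping all Linux capabilities; a sketch reusing the image from above (the pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]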
Links:
- Auditing
- The Ultimate Guide to Audit Logging in Kubernetes: From Setup to Analysis
- Kubernetes Audit Logging
- How To Monitor Kubernetes Audit Logs
- How to monitor Kubernetes audit logs
- Monitor Audit Logs to Safeguard Your Kubernetes Infrastructure
- kube-apiserver Audit Configuration
kube-apiserver flags:
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=1
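Because kube-apiserver runs as a static pod, the policy file and log directory also have to be mounted into it:
volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit
  readOnly: true
- mountPath: /var/log/kubernetes/audit/
  name: audit-log
  readOnly: false
volumes:
- name: audit
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File
- name: audit-log
  hostPath:
    path: /var/log/kubernetes/audit/
    type: DirectoryOrCreate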
Example audit policy file audit-policy.yaml:
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
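Once the kube-apiserver has restarted, events are written to the configured file:
sudo tail -f /var/log/kubernetes/audit/audit.log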