Seccomp policy fails when Kubernetes populates `securityContext` automatically
jvanz opened this issue · 2 comments
This is a spin-off of #4.
@chrisns reported that, when using the following configuration:
```
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: seccomp
spec:
  module: registry://ghcr.io/kubewarden/policies/seccomp-psp:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      scope: "Namespaced"
      operations:
        - CREATE
        - UPDATE
  mutating: false
  settings:
    allowed_profiles:
      - runtime/default
EOF
```
He could not deploy this pod:
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-seccomp-allowed
  annotations:
    container.seccomp.security.alpha.kubernetes.io/nginx: runtime/default
  labels:
    app: nginx-seccomp
spec:
  containers:
    - name: nginx
      image: nginx
EOF
```
```
Error from server: error when creating "tests/seccomp/allowed.yaml": admission webhook "seccomp.kubewarden.admission" denied the request: Resource violations: Invalid container seccomp profile types: RuntimeDefault
```
This happens because of the way the policy settings are implemented. When users define the `allowed_profiles` setting, they are saying "please check whether the pod's annotations contain these profiles" (see the code here). However, Kubernetes automatically populates the `securityContext` when these annotations are set, which is what causes this problem. A workaround is to replicate the equivalent configuration in the other settings (`profile_types` and `localhost_profiles`). Something like:
```
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: seccomp
spec:
  module: registry://ghcr.io/kubewarden/policies/seccomp-psp:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      scope: "Namespaced"
      operations:
        - CREATE
        - UPDATE
  mutating: false
  settings:
    allowed_profiles:
      - runtime/default
    profile_types:
      - RuntimeDefault
EOF
```
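The workaround is needed because, on Kubernetes 1.19+ clusters, the API server syncs the seccomp annotation into the container's `securityContext`. For illustration (this is a sketch of what the webhook receives, not output captured from a real cluster), the admitted pod looks roughly like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-seccomp-allowed
  annotations:
    container.seccomp.security.alpha.kubernetes.io/nginx: runtime/default
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        seccompProfile:
          type: RuntimeDefault   # populated by Kubernetes; this field trips the check
```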
I understand that this is a pain, so I'm opening this issue to improve the situation.
I see some options to fix this issue:
- Drop `allowed_profiles` and remove the check for annotations. This means the policy will only work with Kubernetes 1.19+;
- Drop `profile_types` and `localhost_profiles`. This means the policy will need to parse the `allowed_profiles` entries to check the container's `securityContext`;
- When validating the container's `securityContext`, if no profile type/file is found in the settings, fall back to `allowed_profiles`;
- Dynamically merge the configuration from `allowed_profiles`, `profile_types`, and `localhost_profiles`.
At first glance, I think that if we can support only Kubernetes 1.19 or later, we can go with option 1. Otherwise, I would say that option 3 sounds good.
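To make option 3 concrete, here is a minimal sketch (illustrative names only, not the policy's actual code) of deriving the effective `securityContext` profile types: when `profile_types` is empty, the annotation-style values from `allowed_profiles` are translated into their `securityContext` equivalents, e.g. `runtime/default` → `RuntimeDefault`:

```rust
// Translate an annotation-style seccomp profile into the
// equivalent securityContext profile type.
fn annotation_to_profile_type(annotation: &str) -> Option<&'static str> {
    match annotation {
        "runtime/default" => Some("RuntimeDefault"),
        "unconfined" => Some("Unconfined"),
        a if a.starts_with("localhost/") => Some("Localhost"),
        _ => None,
    }
}

// Option 3: explicit profile_types win; otherwise fall back to
// the types derived from allowed_profiles.
fn effective_profile_types(profile_types: &[String], allowed_profiles: &[String]) -> Vec<String> {
    if !profile_types.is_empty() {
        return profile_types.to_vec();
    }
    allowed_profiles
        .iter()
        .filter_map(|a| annotation_to_profile_type(a).map(str::to_string))
        .collect()
}

fn main() {
    let allowed = vec!["runtime/default".to_string()];
    // With only allowed_profiles configured, the fallback kicks in.
    assert_eq!(effective_profile_types(&[], &allowed), vec!["RuntimeDefault"]);
    // Explicit profile_types take precedence.
    let explicit = vec!["Localhost".to_string()];
    assert_eq!(effective_profile_types(&explicit, &allowed), vec!["Localhost"]);
    println!("ok");
}
```

With this fallback, the original report's settings (only `allowed_profiles: [runtime/default]`) would accept the pod, since the populated `securityContext` type `RuntimeDefault` matches the derived list.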
Another option is to force users to define equivalent `allowed_profiles` and `profile_types`/`localhost_profiles` configurations. I believe we can do that in the settings validation function.
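That check could look something like the following sketch (function and field names are assumptions, not the policy's real settings API): each `allowed_profiles` entry must have its `securityContext` counterpart listed in `profile_types`, otherwise the settings are rejected up front:

```rust
// Map an annotation-style profile to its securityContext type.
fn equivalent_type(annotation: &str) -> Option<&'static str> {
    match annotation {
        "runtime/default" => Some("RuntimeDefault"),
        "unconfined" => Some("Unconfined"),
        a if a.starts_with("localhost/") => Some("Localhost"),
        _ => None,
    }
}

// Reject settings where allowed_profiles and profile_types diverge.
fn validate_settings(allowed_profiles: &[String], profile_types: &[String]) -> Result<(), String> {
    for profile in allowed_profiles {
        match equivalent_type(profile) {
            Some(t) if profile_types.iter().any(|pt| pt == t) => {}
            Some(t) => {
                return Err(format!(
                    "allowed_profiles entry '{}' requires '{}' in profile_types",
                    profile, t
                ))
            }
            None => return Err(format!("unknown profile '{}'", profile)),
        }
    }
    Ok(())
}

fn main() {
    // Equivalent settings pass validation...
    assert!(validate_settings(
        &["runtime/default".to_string()],
        &["RuntimeDefault".to_string()]
    )
    .is_ok());
    // ...while the configuration from the original report fails early,
    // instead of rejecting valid pods at admission time.
    assert!(validate_settings(&["runtime/default".to_string()], &[]).is_err());
    println!("ok");
}
```

The advantage is that the misconfiguration surfaces when the ClusterAdmissionPolicy is created, rather than as a confusing pod rejection later.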