kubewarden/seccomp-psp-policy

Seccomp policy fails when Kubernetes populates `securityContext` automatically.


jvanz commented

This is a spin-off of #4.

@chrisns reported that when using the following configuration:

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: seccomp
spec:
  module: registry://ghcr.io/kubewarden/policies/seccomp-psp:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      scope: "Namespaced"
      operations:
      - CREATE
      - UPDATE
  mutating: false
  settings:
    allowed_profiles:
      - runtime/default
EOF

He cannot deploy this pod:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-seccomp-allowed
  annotations:
    container.seccomp.security.alpha.kubernetes.io/nginx: runtime/default
  labels:
    app: nginx-seccomp
spec:
  containers:
  - name: nginx
    image: nginx
EOF

Error from server: error when creating "tests/seccomp/allowed.yaml": admission webhook "seccomp.kubewarden.admission" denied the request: Resource violations: Invalid container seccomp profile types: RuntimeDefault
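
For reference, the pod that the webhook actually evaluates looks roughly like this (a sketch; the exact shape depends on the Kubernetes version):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-seccomp-allowed
  annotations:
    container.seccomp.security.alpha.kubernetes.io/nginx: runtime/default
  labels:
    app: nginx-seccomp
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      seccompProfile:
        type: RuntimeDefault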

This happens because of the way the policy settings are implemented. When users define the allowed_profiles setting, they are saying "please check whether the annotations on the pod reference these profiles" (see the code here). However, Kubernetes automatically updates the securityContext when these annotations are set, which is what causes the rejection. A workaround is to replicate the equivalent configuration in the other settings (profile_types and localhost_profiles). Something like:

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: seccomp
spec:
  module: registry://ghcr.io/kubewarden/policies/seccomp-psp:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      scope: "Namespaced"
      operations:
      - CREATE
      - UPDATE
  mutating: false
  settings:
    allowed_profiles:
      - runtime/default
    profile_types:
      - RuntimeDefault
EOF

I understand this is a pain, so I'm opening this issue to improve it.

jvanz commented

I see some options to fix this issue:

  1. Drop allowed_profiles and remove the annotation check. This means the policy will only work with Kubernetes 1.19+;
  2. Drop profile_types and localhost_profiles. This means the policy will need to parse the allowed_profiles entries in order to validate the container's securityContext;
  3. When validating the container's securityContext, if no profile type/file is found, fall back to allowed_profiles;
  4. Dynamically merge the configuration from allowed_profiles with profile_types and localhost_profiles.

At first glance, if we can support only Kubernetes 1.19 or later, I think we can go with option 1. Otherwise, I would say option 3 sounds good (see the sketch below).
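
As a rough illustration of option 3, here is a sketch in Go; the type and field names are made up for the example and are not the policy's actual code. The idea: when the securityContext carries a seccomp profile, validate it against profile_types/localhost_profiles, and only fall back to allowed_profiles when it does not.

package main

import "fmt"

// Hypothetical settings and container types, just for the sketch.
type Settings struct {
	AllowedProfiles   map[string]bool // annotation values, e.g. "runtime/default"
	ProfileTypes      map[string]bool // securityContext types, e.g. "RuntimeDefault"
	LocalhostProfiles map[string]bool // localhost profile paths
}

type SeccompProfile struct {
	Type             string // "RuntimeDefault", "Localhost" or "Unconfined"
	LocalhostProfile string // only meaningful when Type == "Localhost"
}

type Container struct {
	Name           string
	SeccompProfile *SeccompProfile // from securityContext; nil when not set
	Annotation     string          // legacy container.seccomp... annotation value; "" when not set
}

// validateContainer checks the securityContext first and falls back to the
// annotation/allowed_profiles only when no profile is found (option 3).
func validateContainer(c Container, s Settings) error {
	if c.SeccompProfile != nil {
		if !s.ProfileTypes[c.SeccompProfile.Type] {
			return fmt.Errorf("container %s: seccomp profile type %q is not allowed", c.Name, c.SeccompProfile.Type)
		}
		if c.SeccompProfile.Type == "Localhost" && !s.LocalhostProfiles[c.SeccompProfile.LocalhostProfile] {
			return fmt.Errorf("container %s: localhost profile %q is not allowed", c.Name, c.SeccompProfile.LocalhostProfile)
		}
		return nil
	}
	// Fallback: no securityContext profile, so check the legacy annotation.
	if c.Annotation != "" && !s.AllowedProfiles[c.Annotation] {
		return fmt.Errorf("container %s: seccomp annotation %q is not allowed", c.Name, c.Annotation)
	}
	return nil
}

func main() {
	settings := Settings{
		AllowedProfiles: map[string]bool{"runtime/default": true},
		ProfileTypes:    map[string]bool{"RuntimeDefault": true},
	}
	nginx := Container{
		Name:           "nginx",
		Annotation:     "runtime/default",
		SeccompProfile: &SeccompProfile{Type: "RuntimeDefault"}, // populated by Kubernetes
	}
	fmt.Println(validateContainer(nginx, settings)) // <nil>: accepted
}

With this behavior the pod from the original report would be accepted with either setting, because the synced field is validated against profile_types and the annotation is only consulted when the field is missing.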

jvanz commented

Another option is to force users to define equivalent allowed_profiles and profile_types/localhost_profiles configurations. I believe we can do that in the settings validation function.
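
A minimal sketch of that idea, again with hypothetical types and field names (the real code and mapping may differ): every allowed_profiles entry must have an equivalent profile_types/localhost_profiles entry, mirroring how Kubernetes maps the annotation values to seccompProfile types.

package main

import (
	"fmt"
	"strings"
)

// Hypothetical settings type, just for the sketch.
type Settings struct {
	AllowedProfiles   map[string]bool // annotation values, e.g. "runtime/default", "localhost/foo.json"
	ProfileTypes      map[string]bool // securityContext types, e.g. "RuntimeDefault"
	LocalhostProfiles map[string]bool // localhost profile paths, e.g. "foo.json"
}

// validateSettings rejects configurations where an allowed_profiles entry has
// no equivalent profile_types/localhost_profiles entry.
func validateSettings(s Settings) error {
	for profile := range s.AllowedProfiles {
		switch {
		case profile == "runtime/default":
			if !s.ProfileTypes["RuntimeDefault"] {
				return fmt.Errorf("allowed_profiles has %q but profile_types lacks RuntimeDefault", profile)
			}
		case strings.HasPrefix(profile, "localhost/"):
			path := strings.TrimPrefix(profile, "localhost/")
			if !s.ProfileTypes["Localhost"] || !s.LocalhostProfiles[path] {
				return fmt.Errorf("allowed_profiles has %q but profile_types/localhost_profiles lack the equivalent entries", profile)
			}
		case profile == "unconfined":
			if !s.ProfileTypes["Unconfined"] {
				return fmt.Errorf("allowed_profiles has %q but profile_types lacks Unconfined", profile)
			}
		}
	}
	return nil
}

func main() {
	// The configuration from the original report: allowed_profiles is set but
	// there is no equivalent profile_types entry, so validation fails early
	// with a clear message instead of rejecting pods later.
	err := validateSettings(Settings{
		AllowedProfiles: map[string]bool{"runtime/default": true},
	})
	fmt.Println(err)
}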