stackrox/kube-linter

[BUG] Check: privilege-escalation-container falsely reports privilege escalation capability despite `allowPrivilegeEscalation: false` being set on the container

tspearconquest opened this issue · 2 comments

System info:

  • OS: Linux and MacOS

Describe the bug

Despite adding allowPrivilegeEscalation: false to a container via values.yaml, and confirming that the rendered Kubernetes manifest applies properly without privileges, I have found that the privilege-escalation-container check in kube-linter flags the container in question as having privilege escalation allowed:

falco/charts/falco/templates/daemonset.yaml: (object: falco/falco apps/v1, Kind=DaemonSet) container "falco-driver-loader" has SYS_ADMIN capability and allows privilege escalation. (check: privilege-escalation-container, remediation: Ensure containers do not allow privilege escalation by setting allowPrivilegeEscalation=false." See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ for more details.)

To Reproduce
Steps to reproduce the behavior:

Given the YAML input below, run kube-linter and validate that the above message is printed, then validate that the input contains allowPrivilegeEscalation: false in the driver loader initContainer's securityContext field.
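For example, a minimal invocation (the filename daemonset.yaml is only an assumption about where the rendered manifest was saved):

kube-linter lint daemonset.yaml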

Sample YAML input

---
# Source: falco/charts/falco/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
  namespace: falco
  labels:
    helm.sh/chart: falco-3.1.2
    app.kubernetes.io/name: falco
    app.kubernetes.io/instance: falco
    app.kubernetes.io/version: "0.34.1"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: falco
      app.kubernetes.io/instance: falco
  template:
    metadata:
      name: falco
      labels:
        app.kubernetes.io/name: falco
        app.kubernetes.io/instance: falco
        app: falco
      annotations:
        checksum/config: 63f13255f526026ed2cc648169f8150f1d5588d7e5edf9c11aa327a37f8d2431
        checksum/rules: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        checksum/certs: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        container.apparmor.security.beta.kubernetes.io/falco: runtime/default
        container.apparmor.security.beta.kubernetes.io/falco-driver-loader: runtime/default
        container.apparmor.security.beta.kubernetes.io/falcoctl-artifact-follow: runtime/default
        container.apparmor.security.beta.kubernetes.io/falcoctl-artifact-install: runtime/default
    spec:
      serviceAccountName: falco
      securityContext:
        capabilities:
          drop:
          - ALL
        seccompProfile:
          type: RuntimeDefault
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:
        - key: CriticalInfra
          operator: Exists
      imagePullSecrets:
        - name: my-registry
      containers:
        - name: falco
          image: falcosecurity/falco-no-driver:0.34.1
          imagePullPolicy: Always
          resources:
            limits:
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext:
            capabilities:
              drop:
              - ALL
            privileged: true
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            seccompProfile:
              type: RuntimeDefault
          args:
            - /usr/bin/falco
            - --cri
            - /run/containerd/containerd.sock
            - --cri
            - /run/crio/crio.sock
            - -K
            - /var/run/secrets/kubernetes.io/serviceaccount/token
            - -k
            - https://$(KUBERNETES_SERVICE_HOST)
            - --k8s-node
            - "$(FALCO_K8S_NODE_NAME)"
            - -pk
          env:
            - name: FALCO_K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FALCO_BPF_PROBE
              value:
          tty: false
          livenessProbe:
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 15
            httpGet:
              path: /healthz
              port: 8765
          readinessProbe:
            initialDelaySeconds: 0
            timeoutSeconds: 5
            periodSeconds: 15
            httpGet:
              path: /healthz
              port: 8765
          volumeMounts:
            - mountPath: /etc/falco
              name: rulesfiles-install-dir
            - mountPath: /root/.falco
              name: root-falco-fs
            - mountPath: /host/proc
              name: proc-fs
            - name: debugfs
              mountPath: /sys/kernel/debug
            - mountPath: /host/var/run/docker.sock
              name: docker-socket
            - mountPath: /host/run/containerd/containerd.sock
              name: containerd-socket
            - mountPath: /host/run/crio/crio.sock
              name: crio-socket
            - mountPath: /etc/falco/falco.yaml
              name: falco-yaml
              subPath: falco.yaml

            - mountPath: /run/falco
              name: grpc-socket-dir
            - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              name: falco
              readOnly: true
        - name: falcoctl-artifact-follow
          image: falcosecurity/falcoctl:0.4.0
          imagePullPolicy: Always
          args:
            - artifact
            - follow
            - --verbose
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /plugins
              name: plugins-install-dir
            - mountPath: /rulesfiles
              name: rulesfiles-install-dir
            - mountPath: /etc/falcoctl
              name: falcoctl-config-volume
          env:
      initContainers:
        - name: falco-driver-loader
          image: falcosecurity/falco-driver-loader:0.34.1
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
              - SYS_ADMIN
              drop:
              - ALL
            privileged: false
            runAsGroup: 65532
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /root/.falco
              name: root-falco-fs
            - mountPath: /host/proc
              name: proc-fs
              readOnly: true
            - mountPath: /host/boot
              name: boot-fs
              readOnly: true
            - mountPath: /host/lib/modules
              name: lib-modules
            - mountPath: /host/usr
              name: usr-fs
              readOnly: true
            - mountPath: /host/etc
              name: etc-fs
              readOnly: true
          env:
            - name: FALCO_BPF_PROBE
              value:
        - name: falcoctl-artifact-install
          image: falcosecurity/falcoctl:0.4.0
          imagePullPolicy: Always
          args:
            - artifact
            - install
            - --verbose
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /plugins
              name: plugins-install-dir
            - mountPath: /rulesfiles
              name: rulesfiles-install-dir
            - mountPath: /etc/falcoctl
              name: falcoctl-config-volume
          env:
      volumes:
        - name: plugins-install-dir
          emptyDir: {}
        - name: rulesfiles-install-dir
          emptyDir: {}
        - name: root-falco-fs
          emptyDir: {}
        - name: boot-fs
          hostPath:
            path: /boot
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: usr-fs
          hostPath:
            path: /usr
        - name: etc-fs
          hostPath:
            path: /etc
        - name: debugfs
          hostPath:
            path: /sys/kernel/debug
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
        - name: containerd-socket
          hostPath:
            path: /run/containerd/containerd.sock
        - name: crio-socket
          hostPath:
            path: /run/crio/crio.sock
        - name: proc-fs
          hostPath:
            path: /proc
        - name: falcoctl-config-volume
          configMap:
            name: falco-falcoctl
            items:
              - key: falcoctl.yaml
                path: falcoctl.yaml
        - name: falco-yaml
          configMap:
            name: falco
            items:
            - key: falco.yaml
              path: falco.yaml

        - name: grpc-socket-dir
          hostPath:
            path: /run/falco
        - name: falco
          projected:
            sources:
            - configMap:
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
            - serviceAccountToken:
                path: token
  updateStrategy:
    type: RollingUpdate

Expected behavior
Kube-linter should warn about the SYS_ADMIN capability on the container, but not under this check, and it should not report that the container allows privilege escalation.


janisz commented

@tspearconquest Thank you for this report. I was able to replicate it and trim the file so kube-linter reports just a single issue:

---
# Source: falco/charts/falco/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: falco
      app.kubernetes.io/instance: falco
  template:
    metadata:
      name: falco
      labels:
        app.kubernetes.io/name: falco
        app.kubernetes.io/instance: falco
        app: falco
    spec:
      securityContext:
        capabilities:
          drop:
          - ALL
        seccompProfile:
          type: RuntimeDefault
      initContainers:
        - name: falco-driver-loader
          image: falcosecurity/falco-driver-loader:0.34.1
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
              - SYS_ADMIN
              drop:
              - ALL
            privileged: false
            runAsGroup: 65532
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault

Indeed, it reports the following error.

KubeLinter 0.6.0-16-g73ceb3009d

<standard input>: (object: <no namespace>/ apps/v1, Kind=DaemonSet) container "falco-driver-loader" has SYS_ADMIN capability and allows privilege escalation. (check: privilege-escalation-container, remediation: Ensure containers do not allow privilege escalation by setting allowPrivilegeEscalation=false."
See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ for more details.)

Error: found 1 lint errors
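The output above was produced by feeding the manifest to kube-linter on standard input, which is why the object is reported as <standard input>. A sketch of such an invocation, assuming the trimmed manifest is saved as trimmed.yaml:

kube-linter lint - < trimmed.yaml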

According to the linked documentation:

allowPrivilegeEscalation is always true when the container:

  • is run as privileged, or
  • has CAP_SYS_ADMIN

I believe this behaviour is correct, although the wording could be improved. The remediation text does not mention the two special cases that force allowPrivilegeEscalation to true. Maybe something like this would be better:

Ensure containers do not allow privilege escalation by setting allowPrivilegeEscalation: false, privileged: false, and removing the CAP_SYS_ADMIN capability.
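For illustration, a securityContext that satisfies all three conditions might look like this (a sketch, not taken from the Falco chart):

securityContext:
  allowPrivilegeEscalation: false
  privileged: false
  capabilities:
    drop:
    - ALL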

Yes, I think this would be more clear. Thanks!