Azure/azure-service-operator

Customize pod security context via helm chart

Closed this issue · 4 comments

Describe the current behavior
Currently there is no way to customize the operator's pod security context. We have ASO running in an AKS cluster with Azure Defender for Cloud enabled for that cluster. Defender recommends that pods not run as the root user (either by setting the runAsUser property to a non-zero value or by setting the runAsNonRoot property to true). We would like to enforce that recommendation but currently cannot, because neither property can be set on the ASO pod.

Describe the improvement
We would love the option to customize the ASO pod's security context via the helm chart, or alternatively to set specific flags via the helm chart (depending on what suits the project's philosophy best).
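For illustration, something along these lines in the chart's values.yaml would cover our case (the key names below are just a sketch of the kind of configurability we're asking for, not existing chart values):

```yaml
# Hypothetical values.yaml overrides -- these keys don't exist in the
# chart today; they illustrate what we'd like to be able to set.
podSecurityContext:          # applied at spec.template.spec.securityContext
  runAsNonRoot: true
  runAsUser: 1021
  runAsGroup: 1021
  seccompProfile:
    type: RuntimeDefault
securityContext:             # applied to the manager container
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
```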

We should probably just set runAsNonRoot by default, since there's no real reason ASO should ever need to run as root in its container.

There's also an open PR, #4207, that may solve this. We haven't accepted it yet; we need to discuss our philosophy on letting users configure security settings versus just setting the most locked-down values ourselves.

We are looking to adopt this in our enterprise and are seeing the following:

admission webhook "validation.gatekeeper.sh" denied the request:
[azurepolicy-k8sazurev3allowedusersgroups]
Container manager is attempting to run without a required
securityContext/runAsNonRoot or securityContext/runAsUser != 0

Aside from this, we have other policies in place, some of which would also error out with the current template. Here is all that is enforced:

  • automountServiceAccountToken must be false
  • hostIPC must be false
  • hostNetwork must be false
  • hostPID must be false
  • runAsNonRoot must be true
  • runAsUser and runAsGroup must not be zero
  • seccompProfile must be set (e.g., RuntimeDefault)
  • containers > allowPrivilegeEscalation must be false
  • containers > all capabilities must be dropped by default
  • containers > privileged must be false
  • containers > readOnlyRootFilesystem must be true
  • resource requests and limits must be set

Here's a sample spec that would pass our policies:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: some-name
  name: some-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-name
  strategy: {}
  template:
    metadata:
      labels:
        app: some-name
      annotations:
        container.apparmor.security.beta.kubernetes.io/some-name: runtime/default
    spec:
      automountServiceAccountToken: false
      hostIPC: false
      hostNetwork: false
      hostPID: false
      securityContext:
        runAsUser: 1021
        runAsGroup: 1021
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: some-name
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
          image: # hidden
          ports:
            - name: http
              containerPort: 8081
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi

This can now be configured (due to #4207), and I've updated the pod spec to follow most of the best practices you called out above (#4242) as well.

There are three exceptions:

  1. automountServiceAccountToken cannot be false for ASO. It needs access to the token both for workload identity authentication (the recommended approach) and to communicate with the kube-apiserver.
  2. hostIPC, hostNetwork, and hostPID all default to false, so AFAIK there is no need to explicitly set them to false in the Deployment.
  3. seccompProfile: RuntimeDefault - I think we can set this. I tested a basic scenario with it set and things seemed fine, but I don't fully understand which syscalls it allows versus blocks, so I haven't enabled it by default yet. I need to investigate further to know whether there are any syscalls we rely on that would be at risk.
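With those exceptions, the defaults we're now shipping amount to roughly the following in the pod template (a sketch, not the exact rendered manifest; seccompProfile is shown commented out since it isn't enabled by default yet):

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    # seccompProfile:           # not enabled by default yet, pending
    #   type: RuntimeDefault    # investigation into blocked syscalls
  containers:
    - name: manager
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        privileged: false
        readOnlyRootFilesystem: true
```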

That sounds good. Looking forward to a new helm chart version to test out. Thanks!