kubernetes-sigs/aws-ebs-csi-driver

Feature request: set 777 file mode on the mount directory

Is your feature request related to a problem? Please describe.
issue description: #2207 (comment)
There are many pods in our cluster that use LVM storage. Although having users switch to root would solve this issue, it would mean a significant overhaul cost for them.

Describe the solution you'd like in detail
The EBS plugin could grant 777 permissions to the mount directory.

Additional context
Add a flag (e.g. mountPermission) to configure the permissions of the mount directory.
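
For illustration only, such a flag might be passed to the ebs-plugin container of the node DaemonSet roughly like this (the --mount-permission flag mirrors the suggestion above and is purely hypothetical; it does not exist in the driver today):

# Hypothetical sketch: --mount-permission does not exist in the driver today;
# this only shows where such a flag could be wired in.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ebs-csi-node
spec:
  selector:
    matchLabels:
      app: ebs-csi-node
  template:
    metadata:
      labels:
        app: ebs-csi-node
    spec:
      containers:
      - name: ebs-plugin
        image: public.ecr.aws/ebs-csi-driver/aws-ebs-csi-driver:v1.37.0
        env:
        - name: CSI_ENDPOINT
          value: unix:/csi/csi.sock
        args:
        - node
        - --endpoint=$(CSI_ENDPOINT)
        - --mount-permission=777   # hypothetical flag; 755 would remain the default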

Hi - can you provide more details? The EBS CSI Driver does not currently support LVM volumes - can you clarify the situation under which this would be helpful, or provide a reproduction case?

You can follow the steps below to replicate the issue we're currently facing.

Step 1: Create a StatefulSet that uses LVM storage; the pod runs just fine. YAML as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-lvm
spec:
  selector:
    matchLabels:
      app: mysql 
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 5000
      containers:
      - image: mysql:8.4
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "csi-lvm-sc"
      resources:
        requests:
          storage: 1Gi

Step 2: Replace the StorageClass in the StatefulSet YAML from step 1 with an EBS StorageClass; the pod then fails to start. YAML as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ebs
spec:
  selector:
    matchLabels:
      app: mysql 
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 5000
      containers:
      - image: mysql:8.4
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ebs-sc2"
      resources:
        requests:
          storage: 1Gi

The pod log is as follows:

2024-11-07 12:08:28+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.4.3-1.el9 started.
2024-11-07 12:08:28+00:00 [Note] [Entrypoint]: Initializing database files
mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (OS errno 13 - Permission denied)
2024-11-07T12:08:28.793268Z 0 [System] [MY-015017] [Server] MySQL Server Initialization - start.
2024-11-07T12:08:28.794330Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.4.3) initializing of server in progress as process 42
2024-11-07T12:08:28.795466Z 0 [ERROR] [MY-010460] [Server] --initialize specified but the data directory exists and is not writable. Aborting.
2024-11-07T12:08:28.795472Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2024-11-07T12:08:28.795503Z 0 [ERROR] [MY-010119] [Server] Aborting
2024-11-07T12:08:28.795794Z 0 [System] [MY-015018] [Server] MySQL Server Initialization - end.

We used to have many pods relying on the LVM plugin, and they worked fine even though the containers ran as non-root users, because the LVM mount directory had 777 permissions. After switching to EBS, the pods could not start because the mount directory has 755 permissions. To make it easier for users to switch to EBS, we would like the mount directory permissions to match those of LVM.

We could add a flag to configure the mount directory permissions when starting the ebs-driver container. 755 would remain the default mode, so this doesn't affect the original logic. WDYT?

Hi - thanks for the explanation, that provides a more complete picture.

Firstly, a workaround - if you use an fsGroup in your securityContext, it should work. For example, instead of:

      securityContext:
        runAsNonRoot: true
        runAsUser: 5000

you would use:

      securityContext:
        runAsNonRoot: true
        runAsUser: 5000
        fsGroup: 5000
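
One caveat: by default the kubelet recursively chowns and chmods the entire volume on every mount to apply fsGroup, which can be slow for volumes with many files. Kubernetes supports fsGroupChangePolicy to skip that walk when the volume root already matches; a minimal sketch:

      securityContext:
        runAsNonRoot: true
        runAsUser: 5000
        fsGroup: 5000
        # Skip the recursive ownership/permission change when the volume
        # root already has the expected GID and permissions.
        fsGroupChangePolicy: "OnRootMismatch"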

Secondly, I do see that CSI drivers are inconsistent about what permissions the root of a volume is given. I'm going to open a dialogue with SIG Storage on the Kubernetes Slack about this, to see whether it's appropriate to standardize on a given value or to allow configuration, given the potential security implications. I'll relay any important updates to this GitHub issue.

If EBS CSI sets the volume root to 755 (which we can't configure), and the owner to root (which we also can't configure), doesn't that mean securityContext.fsGroup actually has no meaningful impact on write capabilities?

As in: the only way to get write access is to change runAsUser to 0; group changes make no difference.

The only workaround is chmod 777 <volume_root> in a sidecar or pre-start hook script.
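
For reference, that workaround as an init container would look roughly like this (a sketch only; it assumes the StatefulSet from earlier, and the container-level securityContext has to override the pod-level runAsNonRoot: true so the chmod can run as root):

      initContainers:
      - name: fix-volume-perms   # illustrative name
        image: busybox:1.36
        command: ["sh", "-c", "chmod 777 /var/lib/mysql"]
        securityContext:
          # Override the pod-level runAsNonRoot so this one container
          # can run as root just long enough to chmod the volume root.
          runAsNonRoot: false
          runAsUser: 0
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql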

If EBS CSI sets the volume root to 755 (which we can't configure), and the owner to root (which we also can't configure), doesn't that mean securityContext.fsGroup actually has no meaningful impact on write capabilities?

fsGroup changes the group that owns the volume; see the Kubernetes docs: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#discussion

fsGroup: Volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup.
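
Concretely, the kubelet recursively chowns the volume to the fsGroup GID and makes it group-writable, so a non-root user in that group can write even though the driver left the mode at 755. A quick way to observe this (a throwaway pod; names are illustrative, and since EBS volumes are ReadWriteOnce the StatefulSet pod must be scaled down first):

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-check   # illustrative name
spec:
  securityContext:
    fsGroup: 5000
  containers:
  - name: check
    image: busybox:1.36
    # Prints owner, group, and mode of the volume root; with fsGroup
    # applied, the group should be 5000 and group-write should be set.
    command: ["sh", "-c", "ls -ldn /data; sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      # PVC created by the StatefulSet's volumeClaimTemplate above.
      claimName: mysql-persistent-storage-mysql-ebs-0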

@ConnorJC3 correct, but that was not the question. Should I try to rephrase my comment?