minio/directpv

Can't mount volumes on Ubuntu/MicroK8s

c0c0n3 opened this issue · 4 comments

c0c0n3 commented

Describe the bug

After following the DirectPV installation procedure and adding drives, deploying a very basic pod with a DirectPV-backed PVC fails because the DirectPV provisioned volume can't be mounted on the pod.

To Reproduce

Detailed steps in the comment below.

Expected behavior

The volume should be mounted on the pod and the container should be able to read/write from/to it.

Screenshots and logs

Logs in the comment below.

Deployment information

  • DirectPV version: 4.0.6
  • Kubernetes Version: 1.27.2
  • OS info: Ubuntu 22.04.2 LTS
  • Kernel version: Linux 5.15.0-1041-azure

Additional context

See comment below.

c0c0n3 commented

Hello :-)

I've been trying to get DirectPV to work on an Azure Ubuntu VM with MicroK8s on it. The DirectPV installation goes smoothly: DirectPV picks up my drives, formats them, and adds them to its stash, and it provisions a volume when asked. But then, for some reason, the volume can't be mounted on the pod. Gory details below :-)

VM setup

I spun up an Azure VM

  • Arch: x86_64; Model: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
  • OS: Ubuntu 22.04.2 LTS---kernel: Linux 5.15.0-1041-azure
  • 3 x 20GB SSD raw drives attached to the VM: /dev/sdc, /dev/sdd, /dev/sde

then did a run-of-the-mill MicroK8s 1.27.2 install.
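
For completeness, the MicroK8s install was just the standard snap-based one; something along the lines of the commands below (channel shown for illustration):

$ sudo snap install microk8s --classic --channel=1.27/stable
$ sudo usermod -aG microk8s $USER
$ microk8s status --wait-ready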

DirectPV install

I installed DirectPV using the kubectl plugin as explained in your docs.

$ kubectl directpv install
$ kubectl directpv discover
$ kubectl directpv init --dangerous drives.yaml
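
For reference, the drives.yaml that discover writes out looked roughly like the sketch below (drive IDs elided; exact fields may vary with the DirectPV version):

version: v1
nodes:
  - name: tv-teadal
    drives:
      - id: ...          # unique drive ID reported by discover (elided here)
        name: sdc
        size: 21474836480
        make: Msft Virtual_Disk
        select: "yes"    # set to "no" to exclude a drive from init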

DirectPV picked up my drives correctly and formatted them.

$ kubectl directpv info
┌─────────────┬──────────┬───────────┬─────────┬────────┐
│ NODE        │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────────┼──────────┼───────────┼─────────┼────────┤
│ • tv-teadal │ 60 GiB   │ 0 B       │ 0       │ 3      │
└─────────────┴──────────┴───────────┴─────────┴────────┘

0 B/60 GiB used, 0 volumes, 3 drives

Volume mount test

After installing DirectPV, I ran a basic test to make sure I could use DirectPV-backed storage. I defined a basic PVC with a storage class of directpv-min-io and a pod to use it, in a test namespace. Here's the YAML I used.

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: directpv-min-io
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ['sh', '-c', 'echo "howzit!" > /mnt/test.txt && sleep 3600']
    volumeMounts:
      - name: storage
        mountPath: /mnt
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: test-pvc

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: test

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: test

resources:
- namespace.yaml
- pod.yaml
- pvc.yaml

I created the resources in the cluster with

$ kustomize build . | kubectl apply -f -
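
(Side note: kubectl's built-in kustomize support should be equivalent here, if you'd rather not use the standalone binary:)

$ kubectl apply -k .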

Outcome

Provisioning

The provisioning phase seemed to work fine.

$ kubectl -n directpv logs deployment/controller
I0715 16:22:03.969118       1 controller.go:1337] provision "test/test-pvc" class "directpv-min-io": started
W0715 16:22:03.969383       1 controller.go:620] "fstype" is deprecated and will be removed in a future release, please use "csi.storage.k8s.io/fstype" instead
I0715 16:22:03.969932       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test", Name:"test-pvc", UID:"7b93a399-ee06-4c92-9ca3-cd55b3ed698e", APIVersion:"v1", ResourceVersion:"1860076", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "test/test-pvc"
I0715 16:22:04.036035       1 controller.go:826] create volume rep: {CapacityBytes:10485760 VolumeId:pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e VolumeContext:map[fstype:xfs] ContentSource:<nil> AccessibleTopology:[segments:<key:"directpv.min.io/identity" value:"directpv-min-io" > segments:<key:"directpv.min.io/node" value:"tv-teadal" > segments:<key:"directpv.min.io/rack" value:"default" > segments:<key:"directpv.min.io/region" value:"default" > segments:<key:"directpv.min.io/zone" value:"default" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0715 16:22:04.036269       1 controller.go:923] successfully created PV pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e for PVC test-pvc and csi volume name pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e
I0715 16:22:04.036386       1 controller.go:1442] provision "test/test-pvc" class "directpv-min-io": volume "pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" provisioned
I0715 16:22:04.036460       1 controller.go:1455] provision "test/test-pvc" class "directpv-min-io": succeeded
I0715 16:22:04.049074       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test", Name:"test-pvc", UID:"7b93a399-ee06-4c92-9ca3-cd55b3ed698e", APIVersion:"v1", ResourceVersion:"1860076", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e
$ kubectl -n test get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
test-pvc   Bound    pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e   10Mi       RWO            directpv-min-io   3m31s
$ kubectl directpv list drives
┌───────────┬──────┬───────────────────┬────────┬────────┬─────────┬────────┐
│ NODE      │ NAME │ MAKE              │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────────┼──────┼───────────────────┼────────┼────────┼─────────┼────────┤
│ tv-teadal │ sdc  │ Msft Virtual_Disk │ 20 GiB │ 20 GiB │ 1       │ Ready  │
│ tv-teadal │ sdd  │ Msft Virtual_Disk │ 20 GiB │ 20 GiB │ -       │ Ready  │
│ tv-teadal │ sde  │ Msft Virtual_Disk │ 20 GiB │ 20 GiB │ -       │ Ready  │
└───────────┴──────┴───────────────────┴────────┴────────┴─────────┴────────┘

Mounting

But then the pod got stuck in the container creation phase.

$ kubectl -n test get pod
NAME       READY   STATUS              RESTARTS   AGE
test-pod   0/1     ContainerCreating   0          13m

A quick look at the volume revealed it was still pending.

$ kubectl directpv list volumes --all
┌──────────────────────────────────────────┬──────────┬───────────┬───────┬─────────┬──────────────┬─────────┐
│ VOLUME                                   │ CAPACITY │ NODE      │ DRIVE │ PODNAME │ PODNAMESPACE │ STATUS  │
├──────────────────────────────────────────┼──────────┼───────────┼───────┼─────────┼──────────────┼─────────┤
│ pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e │ 10 MiB   │ tv-teadal │ sdc   │ -       │ -            │ Pending │
└──────────────────────────────────────────┴──────────┴───────────┴───────┴─────────┴──────────────┴─────────┘

And K8s wasn't able to mount it on the pod.

$ kubectl -n test get event
LAST SEEN   TYPE      REASON        OBJECT         MESSAGE
2m7s        Warning   FailedMount   pod/test-pod   Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
54s         Warning   FailedMount   pod/test-pod   MountVolume.MountDevice failed for volume "pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory

Here are the node server's logs.

$ kubectl -n directpv logs node-server-fls2j -c node-server
I0715 16:22:05.200378 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:05.226068 1109735 quota_linux.go:230] "SetQuota succeeded" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:05.229750 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:05.845035 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:05.865510 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:05.868424 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:07.009554 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:07.013901 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:07.033984 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:09.110370 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:09.113818 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:09.116409 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:13.173372 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:13.177550 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:13.182758 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:21.243992 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:21.248626 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:21.262248 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:22:37.324837 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:22:37.329967 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:22:37.332866 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:23:09.353992 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:23:09.357935 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:23:09.363070 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
I0715 16:24:13.426580 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:24:13.431780 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:24:13.434186 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
E0715 16:24:42.185392 1109735 controller.go:173] volume pvc-eb7ba232-9760-434f-b44b-4383aaae4c67 must be released before cleaning up
E0715 16:25:09.983128 1109735 controller.go:173] volume pvc-830b7fbf-2146-4a1a-83f2-e39fe06f6194 must be released before cleaning up
I0715 16:26:15.518865 1109735 stage_unstage.go:37] "Stage volume requested" volumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" StagingTargetPath="/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/db2317a54ea8b086672e8559a73ebeec0fbf672dc8e07f3f3cbbfc2e82efdd0b/globalmount"
I0715 16:26:15.594228 1109735 quota_linux.go:199] "Quota is already set" Device="/dev/sdc" Path="/var/lib/directpv/mnt/c0f6796d-f079-4f55-a599-cacc06818033/.FSUUID.c0f6796d-f079-4f55-a599-cacc06818033/pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" VolumeID="pvc-7b93a399-ee06-4c92-9ca3-cd55b3ed698e" ProjectID=2769757147 HardLimit=10485760
E0715 16:26:15.598150 1109735 grpc.go:85] "GRPC failed" err="rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory"
c0c0n3 commented

Hope there's enough info for you to figure out what's wrong, but shout if you need anything else :-)
Thanks sooo much!

balamurugana commented

If your kubelet is running in a non-standard path, you need to install DirectPV like below:

$ export KUBELET_DIR_PATH=/path/to/my/kubelet/dir
$ kubectl directpv install

Just uninstall DirectPV and install it again as shown above. Make sure you provide the correct value for the KUBELET_DIR_PATH environment variable.
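
For a MicroK8s setup like the one in this issue, that would be something like the below (the kubelet path is the one confirmed in the next comment):

$ kubectl directpv uninstall
$ export KUBELET_DIR_PATH=/var/snap/microk8s/common/var/lib/kubelet
$ kubectl directpv install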

c0c0n3 commented

@balamurugana that worked like a charm, thanks soooo much!

Just for the record, in case someone bumps into this in the future: MicroK8s 1.27 actually creates the customary K8s directory structure under /var/lib, except those dirs are symlinks into /var/snap/microk8s/common/var/lib, e.g.

$ ls -al /var/lib/kubelet
lrwxrwxrwx 1 root root 41 Jul  7 09:31 /var/lib/kubelet -> /var/snap/microk8s/common/var/lib/kubelet
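
So a quick way to check whether your cluster is affected is to resolve the kubelet dir and see if it points somewhere non-standard, e.g.

$ readlink -f /var/lib/kubelet
/var/snap/microk8s/common/var/lib/kubelet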

If you generate DirectPV 4.0.6 manifests with

$ kubectl directpv install -o yaml > install.yaml

you'll see that the various pods in there are actually configured to read/write from/to the customary K8s dirs, e.g. /var/lib/kubelet. If you install DirectPV using the generated YAML file

$ kubectl apply -f install.yaml

everything works as expected---drive discovery, volume provisioning, etc.---except for the last step of mounting a provisioned volume; see the issue description above.

Not sure why symlinks cause this weirdness, but apparently they do. In fact, if you regenerate the manifests with

$ export KUBELET_DIR_PATH=/var/snap/microk8s/common/var/lib/kubelet
$ kubectl directpv install -o yaml > microk8s-install.yaml

and diff install.yaml against microk8s-install.yaml, you'll see that in the latter the base path for the K8s dirs has become the value of KUBELET_DIR_PATH. If you uninstall DirectPV and then install it back using the MicroK8s file

$ kubectl apply -f microk8s-install.yaml

everything works as expected---e.g. the volume mount test detailed above passes.
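
To double-check, you can read back the file the test pod writes (expected output below, assuming the pod reached Running):

$ kubectl -n test exec test-pod -- cat /mnt/test.txt
howzit!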