minio/directpv

`FailedMount`: MountVolume.MountDevice failed for volume "pvc-xxx" even though the volume was mounted

usersina opened this issue · 4 comments

Bug description

Setting up directpv with newly formatted drives and trying out the functests/minio.yaml successfully creates PVs and PVCs that work as intended.

However, creating a tenant always results in a pod stuck in ContainerCreating, with the following error message:

kubelet MountVolume.MountDevice failed for volume "pvc-32c716c8-a767-4434-83d5-823db386df94" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory

  • kubectl describe pod staging-tenant-single-pool-0 -n staging
Type     Reason       Age                   From     Message
----     ------       ----                 ----               -------
Normal   Scheduled    4m6s                 default-scheduler  Successfully assigned staging/staging-tenant-single-pool-0 to kube-controller
Warning  FailedMount  2m10s                kubelet  Unable to attach or mount volumes: unmounted volumes=[data0], unattached volumes=[configuration kube-api-access-qxb8x data0 staging-tenant-tls]: timed out waiting for the condition
Warning  FailedMount  2m3s                 kubelet  MountVolume.MountDevice failed for volume "pvc-32c716c8-a767-4434-83d5-823db386df94" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory
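
The underlying cause of this error is usually visible in the DirectPV node-server logs on the node where the pod was scheduled. A generic way to check (the pod name below is a placeholder and differs per node):

# Find the node-server pod running on the affected node
kubectl get pods -n directpv -o wide

# Inspect its logs for the bind-mount failure (placeholder pod name)
kubectl logs node-server-xxxxx -n directpv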

The weird thing is that the PVs and PVCs for the tenant were indeed created and Bound:

  • kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                        STORAGECLASS        REASON   AGE
pvc-32c716c8-a767-4434-83d5-823db386df94   50Gi       RWO            Delete           Bound    staging/data0-staging-tenant-single-pool-0   directpv-min-io              13m
  • kubectl get pvc -n staging
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data0-staging-tenant-single-pool-0   Bound    pvc-32c716c8-a767-4434-83d5-823db386df94   50Gi       RWO            directpv-min-io   13m
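
As a cross-check of DirectPV's own view of these volumes, the v4 plugin also ships a list volumes subcommand:

# Show DirectPV-managed volumes and the drives/nodes backing them
kubectl directpv list volumes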

To Reproduce

  1. Install directpv and try out the PV/PVC creation
kubectl apply -f functests/minio.yaml
  2. Install MinIO operator using the Helm Chart
helm repo add minio https://operator.min.io/
helm repo update

helm upgrade \
    --install \
    --namespace minio-operator \
    --create-namespace \
    --version "4.5.8" \
    minio-operator minio/operator
  3. Install MinIO tenant using the Helm Chart
helm upgrade -f config.yaml \
    --install \
    --namespace staging \
    --set secrets.accessKey='minioadmin' \
    --set secrets.secretKey='minioadmin' \
    --set tenant.name='staging-tenant' \
    --set tenant.pools[0].size='50Gi' \
    --version "4.5.8" \
    minio-tenant minio/tenant
config.yaml
secrets:
  name: minio-tenant-env-conf
  # MinIO root user and password
  accessKey: minioadmin
  secretKey: minioadmin

## MinIO Tenant Definition
tenant:
  name: minio-tenant
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2021-07-30T00-02-00Z
    pullPolicy: IfNotPresent
  configuration:
    name: minio-tenant-env-conf
  pools:
    - servers: 1
      name: single-pool
      volumesPerServer: 1
      size: 50Gi
      storageClassName: directpv-min-io
      resources:
        requests:
          cpu: 1000m
          memory: 2Gi
        limits:
          cpu: 2000m
          memory: 3Gi

  prometheus:
    disabled: true
  log:
    disabled: true

Note:

  • I specifically use version 4.5.8 since I get the following error when trying out the newest version, but that's another issue:
Users creation failed: Put "https://minio.staging.svc.cluster.local/minio/admin/v3/add-user?accessKey=xxxxxx": dial tcp 10.152.183.83:443: connect: connection refused

Expected behavior

The Pod goes from the ContainerCreating state to the Ready state and the MinIO tenant is accessible.
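
For reference, the transition can be watched with plain kubectl:

# Watch the tenant pods until they reach Running/Ready
kubectl get pods -n staging -w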

Deployment information

  • DirectPV version: directpv version v4.0.5
  • Kubernetes Version:
Client Version: v1.27.3
Kustomize Version: v5.0.1
Server Version: v1.26.5
  • OS info: Linux kube-controller 5.4.0-150-generic #167-Ubuntu SMP Mon May 15 17:35:05 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Additional context

I am using MicroK8s v1.26.5 revision 5395.

In the past, I had to manually update the kubelet path, but I assume that is no longer necessary since functests/minio.yaml now works without this hack.

# Install kubectl directpv plugin
kubectl krew install directpv

# Generate the .yaml file
kubectl directpv install -o yaml > directpv-install.yaml

# Update the .yaml file by replacing the kubelet path (specific to MicroK8s)
sed -i 's#/var/lib/kubelet/#/var/snap/microk8s/common/var/lib/kubelet/#g' directpv-install.yaml

# Install directpv
kubectl apply -f directpv-install.yaml

Related

Oh, sorry about the confusion; even with the examples I get the exact same error. The PVs and PVCs are indeed created, but the actual minio-0 pod is stuck with the same error:

  • kubectl describe pod minio-0
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    28s                default-scheduler  Successfully assigned default/minio-0 to kube-controller
  Warning  FailedMount  12s (x6 over 28s)  kubelet            MountVolume.MountDevice failed for volume "pvc-ec1e9201-8b33-4b1e-8339-6faf6fc2a78d" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory
  Warning  FailedMount  12s (x6 over 28s)  kubelet            MountVolume.MountDevice failed for volume "pvc-43bfc01a-7896-4855-a72f-383b8849bdff" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory
  Warning  FailedMount  12s (x6 over 27s)  kubelet            MountVolume.MountDevice failed for volume "pvc-0477a2c1-73fd-413f-8ef1-ed556da25ba8" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory
  Warning  FailedMount  11s (x6 over 27s)  kubelet            MountVolume.MountDevice failed for volume "pvc-f45b3236-f93c-47f1-9ab0-bcdd773dfe9c" : rpc error: code = Internal desc = unable to bind mount volume directory to staging target path; no such file or directory

This is most likely the same MicroK8s issue.

If your kubelet is running in a non-standard path, you need to install directpv as shown below:

$ export KUBELET_DIR_PATH=/path/to/my/kubelet/dir
$ kubectl directpv install

Just uninstall directpv and reinstall it as mentioned above.
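
For anyone hitting this on MicroK8s, the full cycle would look roughly like this (kubectl directpv uninstall is the plugin's uninstall subcommand; the path is the MicroK8s one from the note below):

# Remove the existing DirectPV installation
kubectl directpv uninstall

# Reinstall with the kubelet path override
export KUBELET_DIR_PATH=/var/snap/microk8s/common/var/lib/kubelet
kubectl directpv install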

I have written a personal note that I'll share here in case someone else stumbles upon the same issue.

DirectPV MicroK8s installation

This installation requires pointing DirectPV at a non-default kubelet path, because MicroK8s uses /var/snap/microk8s/common/var/lib/kubelet as its kubelet directory.
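
A quick sanity check before installing, assuming the default MicroK8s snap layout:

# Confirm that the MicroK8s kubelet directory exists at the expected path
ls -d /var/snap/microk8s/common/var/lib/kubelet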

1. Install the plugin

kubectl krew install directpv

2. Install the driver

  • Step 1: Set the KUBELET_DIR_PATH env var to the one used by MicroK8s
export KUBELET_DIR_PATH=/var/snap/microk8s/common/var/lib/kubelet
  • Step 2: Install DirectPV (the exported variable overrides the kubelet path)
kubectl directpv install
  • Step 3: Verify that the correct kubelet path was used
kubectl get pods -n directpv
kubectl logs node-server-nq6cp -n directpv

# Output should include
# "Kubelet registration probe created" path="/var/snap/microk8s/common/var/lib/kubelet/plugins/directpv-min-io/registration"

3. Discover the drives

kubectl directpv discover
# will generate a `drives.yaml` file

4. Initialize the drives

kubectl directpv init drives.yaml

5. Verify the installation

kubectl directpv info

Verifying the installation

To make sure that directpv is installed correctly, you can deploy functests/minio.yaml (referred to as minio-test.yaml below).

Creating verification resources

  1. Deploying a test MinIO instance
kubectl apply -f minio-test.yaml

This should create MinIO pods and PVCs using the directpv-min-io storage class.
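
You can also confirm that the storage class itself exists:

# The directpv-min-io storage class should be listed
kubectl get storageclass directpv-min-io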

  2. Verifying resource creation
# Pods should be running
kubectl get pods

# Persistent Volumes and Persistent Volume claims
kubectl get pv
kubectl get pvc
  3. Verifying the creation of paths
# Get the UUID of the drive
kubectl directpv list drives -o wide
# Output e.g. DRIVE ID: dcb2ac05-57a6-462b-xxxx-4351139429d7

# List the volumes in the drive
ls -l /var/lib/directpv/mnt/dcb2ac05-57a6-462b-xxxx-4351139429d7

Output example

drwxr-xr-x 3 root root 24 Jun 17 11:49 pvc-b61eb231-de0c-4d9e-6d9e-78801b6494e8
drwxr-xr-x 3 root root 24 Jun 17 11:49 pvc-b8b8cc0a-495e-431b-w8ce-823f054723c4
...

Deleting the resources

  1. Deleting the Pods
kubectl delete -f minio-test.yaml
  2. Deleting the PVCs (hence deleting the PVs)
kubectl delete pvc --selector=app=minio -n default

Please do not delete any PVCs in production without first backing up the data.
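
One possible safeguard (standard Kubernetes, not DirectPV-specific) is to switch a PV's reclaim policy to Retain before deleting its claim; the PV name below is a placeholder:

# Keep the PV and its data around even after the PVC is deleted
kubectl patch pv pvc-xxx -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'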

Yes, the directpv-min-io storage class has its reclaim policy set to Delete, which deletes the PVs when the corresponding PVCs are deleted.
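
This is easy to verify directly on the storage class:

# Prints "Delete" for directpv-min-io
kubectl get storageclass directpv-min-io -o jsonpath='{.reclaimPolicy}'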