kubernetes-csi/csi-driver-nfs

Dynamic provisioning fails when nfsvers=3

naorsisense opened this issue · 9 comments

I am trying to use csi-driver-nfs for dynamic provisioning with a StorageClass and PVC, but the PVC is never bound.
I tried with mountOptions `rw,hard,nfsvers=3`,
and I also tried without any mountOptions at all.

Static provisioning with a PV and PVC works fine,
and NFS 4.1 works! The issue is only with nfsvers=3.

I expected dynamic provisioning to work without specifying any mountOptions.

How to reproduce:
1. Have an NFS server that only supports NFSv3 (nfsvers=3).
2. Deploy the driver for dynamic provisioning.
3. Apply storageclass-nfs.yaml, either without mountOptions or with nfsvers=3 specified.
4. Apply pvc-nfs-csi-dynamic.yaml.
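For reference, the StorageClass I am applying looks roughly like this (a sketch; the server address and share path below are placeholders, not my actual values):

```yaml
# storageclass-nfs.yaml -- sketch; server and share are placeholders
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder NFS server address
  share: /exported/path            # placeholder exported share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=3
```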

  • CSI Driver version: helm.sh/chart: csi-driver-nfs-v4.4.0
    kubectl version:
    Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:21:03Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"ec73e42cca0cf369574e1cdaaff35401083080d8", GitTreeState:"clean", BuildDate:"2023-06-12T18:43:37Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}

  • OS : "Ubuntu 20.04.4 LTS"

  • Kernel (e.g. uname -a): Linux i24~20.04.1-Ubuntu SMP Thu Apr 7 22:10:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

What's the error message of `kubectl describe pvc <pvc-name>`? And please also collect the controller logs: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case1-volume-createdelete-failed

```
Name:          pvc-nfs-dynamic
Namespace:     default
StorageClass:  nfs-csi
Status:        Pending
Volume:
Labels:
Annotations:   volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:
Events:
  Type     Reason                Age               From                                                                               Message
  ----     ------                ----              ----                                                                               -------
  Normal   ExternalProvisioning  8s (x3 over 12s)  persistentvolume-controller                                                        waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
  Warning  ProvisioningFailed    2s                nfs.csi.k8s.io_aks-build-66019925-vmss000000_6ee8c126-7306-47e3-8940-9d686f96bde0  failed to provision volume with StorageClass "nfs-csi": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          1s (x2 over 12s)  nfs.csi.k8s.io_aks-build-66019925-vmss000000_6ee8c126-7306-47e3-8940-9d686f96bde0  External provisioner is provisioning volume for claim "default/pvc-nfs-dynami
```

I have a similar issue

Same issue for me. It was working fine a week ago; now a new PVC stays in Pending and never initializes.

The PVC description shows messages like this:

...waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
...External provisioner is provisioning volume for claim "default/gridfs-pvc-2"
...failed to provision volume with StorageClass "gridfs-xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx": rpc error: code = DeadlineExceeded desc = context deadline exceeded

Edit:
- csi-driver-nfs version: v4.0.0
- NFS server version: nfsvers=4.1

I already tried to helm uninstall and reinstall the csi-driver-nfs and created a new StorageClass. It didn't help.

csi-nfs-controller-description.log, csi-nfs-controller.log, csi-nfs-node-description.log, csi-nfs-node.log

@naorsisense from the controller logs:

```
E0807 14:34:07.964925       1 utils.go:111] GRPC error: rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.1 10.225.0.4:/SisenseStorage-aks-1-26-34 /tmp/pvc-e812af64-c3c1-4fbb-8773-5acd33a1e5a9
Output: mount.nfs: Protocol not supported
```

that means the nfs-utils package is not installed on the node.

Hi @andyzhangx, thank you for the reply.

I think AKS cluster nodes already come with nfs-common installed.
The solution that worked for me was adding the following option to the StorageClass:

mountOptions:

  • nolock
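Put together, a StorageClass along these lines worked for me (a sketch; the server address and share path are placeholders for your environment):

```yaml
# StorageClass with the nolock mount option added -- server/share are placeholders
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder NFS server address
  share: /exported/path            # placeholder exported share
reclaimPolicy: Delete
mountOptions:
  - nfsvers=3
  - nolock
```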

Thanks!

@naorsisense it works, thanks.

Can confirm this worked for me as well. Not having `nolock` as a mount option did not affect dynamic provisioning for Deployments, but it did prevent StatefulSets from provisioning dynamic PVs. Recreating the StorageClass with the `nolock` mount option allowed StatefulSets to create PVs.