kubernetes-csi/csi-driver-nfs

Unable to mount NFS without the "nolock" option

dvassilyev-vi opened this issue · 2 comments

What happened:
I installed the CSI driver for NFS volumes via the generic Helm chart (roughly the commands sketched below). After installation it shows the pods running as expected:
[screenshot: controller and node pods in Running state]
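For context, the install was roughly this (repo URL and chart name as in the project docs; the kube-system namespace and the pinned version are my choices):

```sh
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.6.0
```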
I created a StorageClass (something like the sketch below) and applied it to the cluster:
[screenshot: StorageClass manifest]
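The StorageClass looked roughly like this (the provisioner name is the driver's documented one; server and share values are placeholders for my filer):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder for the filer address
  share: /exports/k8s              # placeholder for the exported path
reclaimPolicy: Delete
volumeBindingMode: Immediate
```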
But the PVC stays stuck in the "Pending" phase. This is what I get when I describe it:
[screenshot: kubectl describe output for the pending PVC]
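The claim that stays Pending is a minimal one along these lines (the name is mine), and the events above come from describing it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
```

```sh
kubectl describe pvc test-nfs-pvc
```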

What you expected to happen:
I'd like to be able to mount these shared folders without the "nolock" option. Mounting does work with "nolock" set in the StorageClass mount options, but is that safe? So I logged in (kubectl exec -it ... /bin/sh) to the "nfs" container of the controller pod: it has no separate "/run" mount point and "/" is mounted read-only (overlayfs). The same applies to the node pod.
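For reference, the mounts can be inspected with something like this (the pod name is a placeholder; the label selector assumes the chart's default labels):

```sh
# find the controller pod
kubectl get pods -n kube-system -l app=csi-nfs-controller
# inspect "/" and "/run" inside the "nfs" container
kubectl exec -n kube-system -it <csi-nfs-controller-pod> -c nfs -- /bin/sh -c 'mount | grep -E " on / | on /run "'
```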

How to reproduce it:

Anything else we need to know?:
rpc.statd wants to create a lock file under /run, as does rpcbind, and "/var/lib/nfs" might also need to be mounted read-write?
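As a possible workaround (untested, purely a sketch; the container and DaemonSet names assume the chart defaults), a writable /run could be given to the node plugin's "nfs" container via an emptyDir, e.g. with a strategic-merge patch:

```yaml
# run-emptydir-patch.yaml - hypothetical patch, not verified
spec:
  template:
    spec:
      containers:
        - name: nfs
          volumeMounts:
            - name: run-dir
              mountPath: /run
      volumes:
        - name: run-dir
          emptyDir: {}
```

Applied with something like: kubectl -n kube-system patch daemonset csi-nfs-node --patch-file run-emptydir-patch.yaml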

Environment:

  • CSI Driver version: 4.6.0
  • Kubernetes version: 1.27.4 (EKS Anywhere 0.17.4)
  • OS: Bottlerocket OS 1.14.3
  • Kernel: 5.15.117
  • Install tools: helm
  • Others:

If anyone is interested: this happens when your filer has NFSv4 disabled. NFSv4 doesn't need all these RPC daemons to run. With NFSv4 enabled, it works as expected.
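For others hitting this: once NFSv4 is enabled on the server, the protocol version can also be pinned from the client side in the StorageClass, which sidesteps the sideband locking daemons entirely (v4 has locking built into the protocol). A minimal snippet, assuming the StorageClass sketched above:

```yaml
mountOptions:
  - nfsvers=4.1   # force NFSv4; no rpc.statd/lockd/rpcbind needed for locking
```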

Thanks for the info.