kubernetes-retired/external-storage

[nfs-client-provisioner] No restrictions on PVC

yuchunyun opened this issue · 7 comments

I found that the size of the PVC is 1Gi, but I can write unlimited data until the NFS server is full. Is there any way to enforce the PVC size as the upper limit for my application?

I have seen the same behavior.

@yuchunyun @Stijn98s
check the doc https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/docs/deployment.md

You may also want to enable per-PV quota enforcement. It is based on xfs project level quotas and so requires that the volume mounted at /export be xfs mounted with the prjquota/pquota option. It also requires that it has the privilege to run xfs_quota.
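
For reference, a minimal sketch of the host-side prerequisite described above, assuming the export is backed by an xfs filesystem on a hypothetical device `/dev/sdb1` (device and path are illustrative, not taken from the docs):

```sh
# Mount the export with xfs project quotas enabled
mount -o prjquota /dev/sdb1 /export

# Or persistently via /etc/fstab:
# /dev/sdb1  /export  xfs  defaults,prjquota  0 0
```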

Can that option (per-PV quota enforcement) be added via Helm? Also, does xfs_quota work on btrfs?

@01mcgrady01 I have made a StorageClass for the NFS PVC, added pquota to the mountOptions, and added the `-enable-xfs-quota` option to the pod. When I check the mount options in the nfs-provisioner I see everything is mounted correctly (`on /export type xfs (rw,relatime,attr2,inode64,prjquota)`) and everything is running, but when I execute `df -h` I still see the size of the underlying volume.

When I execute `xfs_quota -x -c "report -h" /export` I get `xfs_quota: cannot setup path for mount /export: No such device or address`.
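
For context, a rough sketch of the provisioner-side piece described above. The `-enable-xfs-quota` flag is the one mentioned in this thread; the container name, image tag, and provisioner name are assumptions about a typical nfs-provisioner Deployment, not a verified configuration:

```yaml
# Illustrative snippet of the provisioner container spec (not a complete Deployment)
containers:
  - name: nfs-provisioner
    image: quay.io/kubernetes_incubator/nfs-provisioner:latest  # hypothetical tag
    args:
      - "-provisioner=example.com/nfs"   # hypothetical provisioner name
      - "-enable-xfs-quota=true"         # per-PV quota enforcement, per the docs quoted above
    volumeMounts:
      - name: export-volume
        mountPath: /export
```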

@Stijn98s
You should execute the command on the host where your nfs-provisioner runs:
`xfs_quota -x -c "report -h" {{your host path}}`
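
For example (the host path below is purely hypothetical; use whatever directory backs /export on your node):

```sh
# On the node where the nfs-provisioner pod is scheduled
findmnt /srv/nfs-export                      # hypothetical path; should show xfs with prjquota
xfs_quota -x -c "report -h" /srv/nfs-export  # project quota report for the export
```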

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Thanks for reporting the issue!

This repo is no longer being maintained and we are in the process of archiving this repo. Please see kubernetes/org#1563 for more details.

If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.

Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! 🙏