Capacity value does not reflect in PVC list/status after updating storage request in PVC
What happened:
I tried to edit a PVC and increase the storage request, but it does not increase the reported storage size of the PVC.
What you expected to happen:
I know that since we are using NFS, the volume will use the entire NFS export and is not restricted to the PVC size. But at least the new capacity value should have been updated in the PV and the PVC so the PVC and PV lists show the correct value.
How to reproduce it:
- Create a dynamic PVC in Kubernetes using the csi-driver-nfs CSI driver.
- Create a pod which uses this PVC.
- Edit the PVC and increase `resources.requests.storage` (a minimal example is below).
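A minimal sketch of the reproduction, assuming a StorageClass named `nfs-csi` backed by this driver (all resource names here are placeholders):

```sh
# Create a dynamic PVC and a pod that mounts it (placeholder names)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-demo
EOF

# Increase the request; the capacity shown by "kubectl get pvc,pv" does not change
kubectl patch pvc pvc-demo --type=merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
kubectl get pvc pvc-demo
```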
Anything else we need to know?:
Is there a workaround to make sure the PVC and PV size is reflected after we edit the storage request in the PVC, without taking the pod/app down?
Environment:
- CSI Driver version: 4.5.0
- Kubernetes version (use `kubectl version`): 1.29
- OS (e.g. from /etc/os-release): Ubuntu 22.04
- Kernel (e.g. `uname -a`):
- Install tools: helm
- Others:
this csi driver does not support resize
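For context, the expand controller only attempts a resize when the StorageClass sets `allowVolumeExpansion: true`, and even then the CSI driver must implement the expansion capability, which this driver does not. A quick check (the `nfs-csi` StorageClass name is a placeholder):

```sh
# Prints "true" only if expansion would even be attempted for PVCs of this class
kubectl get storageclass nfs-csi -o jsonpath='{.allowVolumeExpansion}{"\n"}'
```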
Thanks @andyzhangx
Can we add a feature where, when we resize a PVC:
- It updates the PV size to the new size in metadata (definition) only
- It updates the PVC status size to the new size

I know there is no way to restrict the size of data inside a PV directory on an NFS server, but at least this would fix the issue of the PVC size not reflecting the correct status.
@navilg that's a dummy resize; it's better not to do that since this driver does not actually support resize.
Even though it is a dummy resize, it is really useful to be able to do that "resync in disk size" before backing up with Velero. That way the restore to another CSI driver creates a PVC of the right size before restoring; otherwise the restore just fails.
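For anyone who needs that "resync" today, a minimal sketch of the manual, metadata-only patch (PVC/PV names and the `20Gi` size are placeholders; this only changes what is reported, enforces nothing on the NFS export, and, as noted above, is not a supported resize):

```sh
# Find the PV bound to the claim (placeholder claim name)
PV_NAME=$(kubectl get pvc pvc-demo -o jsonpath='{.spec.volumeName}')

# Update the PV's reported capacity (definition/metadata only)
kubectl patch pv "$PV_NAME" --type=merge -p '{"spec":{"capacity":{"storage":"20Gi"}}}'

# Update the PVC's reported capacity; requires kubectl v1.24+ for --subresource
kubectl patch pvc pvc-demo --subresource=status --type=merge \
  -p '{"status":{"capacity":{"storage":"20Gi"}}}'
```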
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale