ResourceExhausted: mismatch of Tenant's PVCs vs freeCapacity
zeph opened this issue · 19 comments
Describe the bug
PVCs do not get fulfilled. The only lead in the logs, after raising directpv logging to -v=5, is:
ResourceExhausted desc = no drive found for requested topology
To Reproduce
I have a cluster of 6 nodes; 3 workers have been added with storage.
I labelled the 3 new workers with node-role.kubernetes.io/minio=true and used the node selector upon installation of the operator:
kubectl directpv install --node-selector node-role.kubernetes.io/minio=true
I discovered the drives and initialized them, also labelling which are hdd and which are ssd (they are not really; this is a simulation on VMware-provisioned drives):
% kubectl directpv label drives disktype=ssd --drives=sd{b,c,d,e}
% kubectl directpv label drives disktype=hdd --drives=sd{f,g,h,i}
getting...
% directpv.stage list drives --output wide --show-labels
┌──────────────────────────┬──────┬─────────────────────┬────────┬────────┬─────────┬────────┬──────────────────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │ DRIVE ID │ LABELS │
├──────────────────────────┼──────┼─────────────────────┼────────┼────────┼─────────┼────────┼──────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ mbbf-extern-k8s-worker-4 │ sdf │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ ebd5313d-ef0a-493e-bd50-b7087cf49b53 │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdf,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdg │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 9f437a30-2431-4549-997d-1eb126d7f63b │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdg,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdh │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 18388cbc-f291-40df-b4a4-5f21f658f38d │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdh,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdi │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 84a81ed3-5e2e-45cc-a23f-aab6471e6e4e │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdi,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdb │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ c640b9bb-cdab-4582-8fd3-a12fe7ade9ea │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdb,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdc │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ dc956005-c68c-454c-82bc-219bf0430e83 │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdc,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sdd │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 4306c5f5-22d2-457a-bddf-1015942c0dbe │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdd,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-4 │ sde │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 7df9e86e-9176-4468-8300-d6fd66e69945 │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sde,node=mbbf-extern-k8s-worker-4,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdf │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 7a83fcba-e04a-453a-9e3e-5f0f50e415cb │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdf,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdg │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ a8adab8f-ef59-4f29-87e6-1a93c9920ee1 │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdg,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdh │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ da347aad-c8d5-4af9-9ea4-aa5440b8679b │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdh,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdi │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ ea9466da-3faa-4ba2-bfdc-1ba66393ed6a │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdi,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdb │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 86c2a0fa-a5dd-4b08-a8ec-af0ce89a27a8 │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdb,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdc │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ cd571970-ff8b-47b0-a1f7-ee974b20efc4 │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdc,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sdd │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ dce8daf3-4681-450e-bd71-eb9147825754 │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdd,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-5 │ sde │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 81c52ff1-2eed-4d4d-8563-1bc8ecf4446c │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sde,node=mbbf-extern-k8s-worker-5,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdf │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 7f9bd6b8-a318-4e22-8cc6-949b8d4d67d4 │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdf,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdg │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 84db6f9c-4180-4eb1-9a45-b9304c5be44c │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdg,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdh │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ aeaed335-2b8b-4c08-a168-702de57c9f39 │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdh,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdi │ VMware Virtual_disk │ 10 GiB │ 10 GiB │ - │ Ready │ 3a771c42-95fe-470a-9d53-204cee918dfc │ access-tier=Default,created-by=directpv-driver,disktype=hdd,drive-name=sdi,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdb │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ aeb58bcb-a5cc-40a1-aa3b-02014faa430d │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdb,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdc │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 6d59f5fe-28ce-4689-aaf8-7782cd366deb │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdc,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sdd │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 04d14895-1424-460b-a0e1-ed1089fc27eb │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sdd,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
│ mbbf-extern-k8s-worker-6 │ sde │ VMware Virtual_disk │ 25 GiB │ 25 GiB │ - │ Ready │ 1a8911b9-01ae-49ce-bcea-f9ca360cf95e │ access-tier=Default,created-by=directpv-driver,disktype=ssd,drive-name=sde,node=mbbf-extern-k8s-worker-6,version=v1beta1 │
└──────────────────────────┴──────┴─────────────────────┴────────┴────────┴─────────┴────────┴──────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
% kubectl directpv info
┌────────────────────────────┬──────────┬───────────┬─────────┬────────┐
│ NODE │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├────────────────────────────┼──────────┼───────────┼─────────┼────────┤
│ • mbbf-extern-k8s-worker-4 │ 140 GiB │ 0 B │ 0 │ 8 │
│ • mbbf-extern-k8s-worker-5 │ 140 GiB │ 0 B │ 0 │ 8 │
│ • mbbf-extern-k8s-worker-6 │ 140 GiB │ 0 B │ 0 │ 8 │
└────────────────────────────┴──────────┴───────────┴─────────┴────────┘
0 B/420 GiB used, 0 volumes, 24 drives
Then I created the StorageClasses:
./create-storage-class.sh ssd-tier-storage 'disktype: ssd'
./create-storage-class.sh hdd-tier-storage 'disktype: hdd'
and I installed the MinIO Operator ... kubectl minio init
kubectl minio version
v5.0.6
and finally I created the MinIO tenant via the UI, giving the node selector and the anti-affinity for the pods (since I spotted in 2 other tickets that the issue was related to that). Here are snippets from its STS:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/minio
            operator: In
            values:
            - "true"
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: v1.min.io/tenant
            operator: In
            values:
            - stage-hdd
          - key: v1.min.io/pool
            operator: In
            values:
            - pool-1
        topologyKey: kubernetes.io/hostname
and 4x 25 GiB drives on each of 3 pods, for a total of 300 GiB:
volumeClaimTemplates:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    name: data0
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "26843545600"
    storageClassName: hdd-tier-storage
    volumeMode: Filesystem
  status:
    phase: Pending
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    name: data1
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "26843545600"
    storageClassName: hdd-tier-storage
    volumeMode: Filesystem
  status:
    phase: Pending
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    name: data2
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "26843545600"
    storageClassName: hdd-tier-storage
    volumeMode: Filesystem
  status:
    phase: Pending
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    name: data3
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "26843545600"
    storageClassName: hdd-tier-storage
    volumeMode: Filesystem
  status:
    phase: Pending
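For reference, the requested capacity adds up as expected; a quick arithmetic sketch (nothing directpv-specific, names are illustrative):

```python
BYTES_PER_GIB = 1024 ** 3

pvc_bytes = 26843545600   # storage request in each volumeClaimTemplate above
pvcs_per_pod = 4          # data0..data3
pods = 3

# Each PVC asks for exactly 25 GiB
assert pvc_bytes == 25 * BYTES_PER_GIB

total = pvc_bytes * pvcs_per_pod * pods
print(total // BYTES_PER_GIB)  # 300
```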
Expected behavior
I'm expecting the volumes to be allocated; instead they are not.
Screenshots and logs
Just ask me what more to add... I believe you have most of it right above.
Here is a snippet from the logs showing the failing requests... % stern controller -n directpv
controller-54f6fd599d-6stbd controller I0927 13:27:41.583807 1 server.go:131] "Create volume requested" name="pvc-32a36f61-ce2b-41d6-a651-f5d4319bdf09" requiredBytes="26,843,545,600"
controller-54f6fd599d-6stbd controller E0927 13:27:41.587890 1 grpc.go:85] "GRPC failed" err="rpc error: code = ResourceExhausted desc = no drive found for requested topology"
controller-54f6fd599d-6stbd controller E0927 13:27:41.589918 1 grpc.go:85] "GRPC failed" err="rpc error: code = ResourceExhausted desc = no drive found for requested topology"
controller-54f6fd599d-6stbd controller E0927 13:27:41.593280 1 grpc.go:85] "GRPC failed" err="rpc error: code = ResourceExhausted desc = no drive found for requested topology"
controller-54f6fd599d-6stbd controller E0927 13:27:41.600775 1 grpc.go:85] "GRPC failed" err="rpc error: code = ResourceExhausted desc = no drive found for requested topology"
controller-54f6fd599d-6stbd controller I0927 13:27:51.576501 1 server.go:131] "Create volume requested" name="pvc-8517e20b-ba6c-4d23-9ff7-04c84bb77964" requiredBytes="26,843,545,600"
controller-54f6fd599d-6stbd controller I0927 13:27:51.582882 1 server.go:131] "Create volume requested" name="pvc-930206ec-b1cb-4e44-b23c-d7a2e11946a6" requiredBytes="26,843,545,600"
controller-54f6fd599d-6stbd csi-provisioner W0927 13:14:28.114571 1 controller.go:620] "fstype" is deprecated and will be removed in a future release, please use "csi.storage.k8s.io/fstype" instead
controller-54f6fd599d-6stbd csi-provisioner I0927 13:14:28.114664 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"stage-hdd-tenant", Name:"data3-stage-hdd-pool-0-2", UID:"a12ac873-f119-4d5f-82fd-e784b29aead5", APIVersion:"v1", ResourceVersion:"210209274", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "stage-hdd-tenant/data3-stage-hdd-pool-0-2"
controller-54f6fd599d-6stbd csi-provisioner I0927 13:14:28.121580 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"stage-hdd-tenant", Name:"data1-stage-hdd-pool-0-2", UID:"9b899fcd-3dce-446e-b326-694a45fd8c20", APIVersion:"v1", ResourceVersion:"210209271", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "hdd-tier-storage": rpc error: code = ResourceExhausted desc = no drive found for requested topology
controller-54f6fd599d-6stbd csi-provisioner I0927 13:14:28.122598 1 controller.go:1429] provision "stage-hdd-tenant/data0-stage-hdd-pool-0-2" class "hdd-tier-storage": volume rescheduled because: failed to provision volume with StorageClass "hdd-tier-storage": rpc error: code = ResourceExhausted desc = no drive found for requested topology
controller-54f6fd599d-6stbd csi-provisioner I0927 13:14:28.126677 1 controller.go:1429] provision "stage-hdd-tenant/data1-stage-hdd-pool-0-2" class "hdd-tier-storage": volume rescheduled because: failed to provision volume with StorageClass "hdd-tier-storage": rpc error: code = ResourceExhausted desc = no drive found for requested topology
controller-54f6fd599d-6stbd csi-provisioner I0927 13:14:28.132119 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"stage-hdd-tenant", Name:"data3-stage-hdd-pool-0-2", UID:"a12ac873-f119-4d5f-82fd-e784b29aead5", APIVersion:"v1", ResourceVersion:"210209274", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "hdd-tier-storage": rpc error: code = ResourceExhausted desc = no drive found for requested topology
Deployment information (please complete the following information):
- DirectPV version: v4.0.6
- Kubernetes Version: v1.24.15
- OS info: Ubuntu 20.04.3 LTS
- Kernel version: 5.4.0-144-generic
Additional context
This is a simulation setup on VMWare/Rancher before getting the real physical machines
Are you able to create the simple setup mentioned in https://github.com/minio/directpv/blob/master/docs/volume-provisioning.md using both storage classes?
doh! @balamurugana that worked right away... % kubectl describe pvc sleep-pvc
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 33s (x2 over 36s) persistentvolume-controller waiting for first consumer to be created before binding
Normal ExternalProvisioning 30s persistentvolume-controller waiting for a volume to be created, either by external provisioner "directpv-min-io" or manually created by system administrator
Normal Provisioning 30s directpv-min-io_controller-54f6fd599d-6stbd_8eb9e7a2-976a-43f2-b972-fcf99dbc3076 External provisioner is provisioning volume for claim "stage-hdd-tenant/sleep-pvc"
Normal ProvisioningSucceeded 30s directpv-min-io_controller-54f6fd599d-6stbd_8eb9e7a2-976a-43f2-b972-fcf99dbc3076 Successfully provisioned volume pvc-b9f9a7be-01c9-43ac-b9d7-5eb86237951f
% kubectl directpv list volumes
┌──────────────────────────────────────────┬──────────┬──────────────────────────┬───────┬───────────┬──────────────────┬─────────┐
│ VOLUME │ CAPACITY │ NODE │ DRIVE │ PODNAME │ PODNAMESPACE │ STATUS │
├──────────────────────────────────────────┼──────────┼──────────────────────────┼───────┼───────────┼──────────────────┼─────────┤
│ pvc-b9f9a7be-01c9-43ac-b9d7-5eb86237951f │ 8.0 MiB │ mbbf-extern-k8s-worker-5 │ sdc │ sleep-pod │ stage-hdd-tenant │ Bounded │
└──────────────────────────────────────────┴──────────┴──────────────────────────┴───────┴───────────┴──────────────────┴─────────┘
I just removed the nodeSelector on the STS from the MinIO tenant, but it doesn't seem to help.
...I had already done the test selecting a lower disk size; that didn't help either. I'm pretty puzzled.
@zeph It looks like the MinIO Operator sets topology constraints, and such topology is not set in DirectPV.
Never mind... 24 GiB and it worked... but I'm losing quite some space this way: the overall is 288.0 GiB instead of 300.
@balamurugana I don't get your last comment...
@zeph I see. The size constraint was not met for the volume claim previously. Please close the issue if things are working well for you.
thanks for the follow up @balamurugana !
P.S. Shall I create a ticket for the mismatch in RAW capacity? Because the numbers were quite exact.
@zeph When adding drives to DirectPV, those drives are formatted to XFS and some space is reserved for XFS metadata. It is not possible to use the raw capacity anyway.
@balamurugana I know; nevertheless:
- on tenant creation I have to give 299 instead of 300 in the wizard asking for RAW capacity (this way it works)... and it creates PVCs of 24.9 each
- on the old tenant, created giving 300... I attempted to set 24 on the PVCs and it worked (see above), but I have no way to input 24.9 as the wizard did when doing it from scratch...
So, no one is arguing about the reserved space for XFS, but there is a < check somewhere instead of a <=.
...also a bug in the wizard UI limiting the input so it does not accept a .9 in the field.
You could open an issue in minio-operator for this.
@balamurugana for the UI, sure... but the allocator of directpv is checking for a value < the size of the disk, instead of <= (and this has nothing to do with the space reserved for XFS; the 24.9 vs 25 we entered above is totally arbitrary).
The check at https://github.com/minio/directpv/blob/master/pkg/csi/controller/utils.go#L51 works correctly. There is no issue on the DirectPV side. You could look at the freeCapacity of one of the directpvdrives via kubectl get directpvdrives -o yaml and validate this against the doc https://github.com/minio/directpv/blob/master/docs/volume-provisioning.md
@balamurugana actually, both at the line you pointed out and here too, https://github.com/minio/directpv/blob/master/pkg/csi/controller/server.go#L332, there is just a < and not a <=, which excludes exactly the situation of a perfect match between capacity available and capacity requested.
Shall I reopen this ticket or make a new one? ;-)
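To make the edge case being argued here concrete, a hypothetical sketch (illustration only, not directpv's actual allocator code): a strict < comparison rejects a request that exactly equals the available capacity, while <= would accept it.

```python
free = 26843545600       # capacity available on a drive, in bytes
requested = 26843545600  # an exact-match request

# Hypothetical comparisons, for illustration only:
fits_strict = requested < free       # rejects a perfect match
fits_inclusive = requested <= free   # accepts a perfect match

print(fits_strict, fits_inclusive)   # False True
```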
As I mentioned earlier, there is no issue on the DirectPV side. Below is a test showing how it works:
[root@master ~]# ./kubectl-directpv_4.0.8_linux_amd64 list drives --all -o wide
┌────────┬──────┬──────┬─────────┬─────────┬─────────┬────────┬──────────────────────────────────────┐
│ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │ DRIVE ID │
├────────┼──────┼──────┼─────────┼─────────┼─────────┼────────┼──────────────────────────────────────┤
│ master │ vdb │ - │ 512 MiB │ 509 MiB │ - │ Ready │ 432aade7-c7a2-4236-bb14-75621d3ae6a7 │
└────────┴──────┴──────┴─────────┴─────────┴─────────┴────────┴──────────────────────────────────────┘
[root@master ~]# kubectl get directpvdrives -o yaml
apiVersion: v1
items:
- apiVersion: directpv.min.io/v1beta1
  kind: DirectPVDrive
  metadata:
    creationTimestamp: "2023-09-27T15:45:18Z"
    finalizers:
    - directpv.min.io/data-protection
    generation: 1
    labels:
      directpv.min.io/access-tier: Default
      directpv.min.io/created-by: directpv-driver
      directpv.min.io/drive-name: vdb
      directpv.min.io/node: master
      directpv.min.io/version: v1beta1
    name: 432aade7-c7a2-4236-bb14-75621d3ae6a7
    resourceVersion: "20223"
    uid: 013891a7-a077-4d43-b3ce-98d6f8aaa2c6
  spec: {}
  status:
    allocatedCapacity: 0
    freeCapacity: 533254144
    fsuuid: 432aade7-c7a2-4236-bb14-75621d3ae6a7
    status: Ready
    topology:
      directpv.min.io/identity: directpv-min-io
      directpv.min.io/node: master
      directpv.min.io/rack: default
      directpv.min.io/region: default
      directpv.min.io/zone: default
    totalCapacity: 536870912
kind: List
metadata:
  resourceVersion: ""
[root@master ~]# cat sleep.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sleep-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 533254144
---
apiVersion: v1
kind: Pod
metadata:
  name: sleep-pod
spec:
  volumes:
  - name: sleep-volume
    persistentVolumeClaim:
      claimName: sleep-pvc
  containers:
  - name: sleep-container
    image: example.org/test/sleep:v0.0.1
    volumeMounts:
    - mountPath: "/mnt"
      name: sleep-volume
---
[root@master ~]# kubectl apply -f sleep.yaml
persistentvolumeclaim/sleep-pvc created
pod/sleep-pod created
[root@master ~]# ./kubectl-directpv_4.0.8_linux_amd64 list volumes --all -o wide
┌──────────────────────────────────────────┬──────────┬────────┬───────┬───────────┬──────────────┬─────────┬──────────────────────────────────────┐
│ VOLUME │ CAPACITY │ NODE │ DRIVE │ PODNAME │ PODNAMESPACE │ STATUS │ DRIVE ID │
├──────────────────────────────────────────┼──────────┼────────┼───────┼───────────┼──────────────┼─────────┼──────────────────────────────────────┤
│ pvc-48bd6349-4c81-48bd-b58f-628d3aa03aaa │ 509 MiB │ master │ vdb │ sleep-pod │ default │ Bounded │ 432aade7-c7a2-4236-bb14-75621d3ae6a7 │
└──────────────────────────────────────────┴──────────┴────────┴───────┴───────────┴──────────────┴─────────┴──────────────────────────────────────┘
[root@master ~]# ./kubectl-directpv_4.0.8_linux_amd64 list drives --all -o wide
┌────────┬──────┬──────┬─────────┬──────┬─────────┬────────┬──────────────────────────────────────┐
│ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │ DRIVE ID │
├────────┼──────┼──────┼─────────┼──────┼─────────┼────────┼──────────────────────────────────────┤
│ master │ vdb │ - │ 512 MiB │ - │ 1 │ Ready │ 432aade7-c7a2-4236-bb14-75621d3ae6a7 │
└────────┴──────┴──────┴─────────┴──────┴─────────┴────────┴──────────────────────────────────────┘
[root@master ~]#
Thanks for the above command... I could not spot the raw capacity of the drives in bytes before:
status:
  allocatedCapacity: 13238272
  freeCapacity: 26830307328
  fsuuid: dce8daf3-4681-450e-bd71-eb9147825754
  make: VMware Virtual_disk
  status: Ready
  topology:
    directpv.min.io/identity: directpv-min-io
    directpv.min.io/node: mbbf-extern-k8s-worker-5
    directpv.min.io/rack: default
    directpv.min.io/region: default
    directpv.min.io/zone: default
  totalCapacity: 26843545600
freeCapacity: 26830307328 != totalCapacity: 26843545600
26843545600 / 1024^3 is exactly 25 GiB, and that's why it is failing.
At the end of the day, what counts is the freeCapacity, not the totalCapacity of the drive.
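The numbers from the drive status above make the failure mode concrete (plain arithmetic on the values reported by kubectl get directpvdrives):

```python
BYTES_PER_GIB = 1024 ** 3

total_capacity = 26843545600  # raw drive size
free_capacity = 26830307328   # after the XFS metadata reservation

assert total_capacity == 25 * BYTES_PER_GIB  # the tenant requested exactly this
assert total_capacity > free_capacity        # so the request cannot fit -> ResourceExhausted
assert 24 * BYTES_PER_GIB <= free_capacity   # which is why a 24 GiB request succeeded

# Usable capacity of the drive in GiB, just under 25
print(round(free_capacity / BYTES_PER_GIB, 2))  # 24.99
```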
annoying, but that clarifies it
OK, I'll create a ticket for the WebUI to accept digits after the decimal point, so one can get as close as possible to the full disk capacity.
p.s. thanks @balamurugana for the prompt responses, the patience and the time you did put into answering me