ceph.com/rbd provisioner doesn't resize the PVC
qurname2 opened this issue · 3 comments
Problem:
I tried to resize my PVC prometheus-db, but I receive a strange error, exit status 110:

```
kd pvc prometheus-db
Events:
  Type     Reason              Age                  From           Message
  ----     ------              ----                 ----           -------
  Warning  VolumeResizeFailed  28s (x145 over 18h)  volume_expand  error expanding volume "monitoring/prometheus-db" of plugin "kubernetes.io/rbd": rbd info failed, error: exit status 110
```
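For what it's worth, a failed `rbd` command often exits with the raw errno value, and on Linux errno 110 is `ETIMEDOUT` ("Connection timed out") — which would point at the host running the resize controller being unable to reach the Ceph monitors. A quick way to check the mapping (assuming Python 3 is available):

```shell
# errno 110 on Linux is ETIMEDOUT ("Connection timed out").
python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'
# prints: ETIMEDOUT - Connection timed out
```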
Expected behavior:
The volume should be resized after I change `spec.resources.requests.storage` in the PVC YAML file.
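For reference, the same resize can be triggered without editing the YAML file, e.g. with a merge patch. The `monitoring` namespace is taken from the error message above; the `200Gi` size is just a placeholder:

```shell
# Hypothetical example: bump the requested size on the PVC. Only
# spec.resources.requests.storage changes; 200Gi is a placeholder value.
kubectl patch pvc prometheus-db -n monitoring --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
```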
What I have:
Versions

* kubernetes version: 1.14.3
* rbd-provisioner: quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
* controller-manager: gcr.io/google_containers/hyperkube:v1.14.3
OS

```
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.11 (stretch)
Release:        9.11
Codename:       stretch

uname -a
Linux master-1 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u5 (2019-08-11) x86_64 GNU/Linux
```
Storage Class

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: dynamic
parameters:
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  imageFormat: "2"
  monitors: 1.2.3.4:6789,5.6.7.8:6789,9.10.11.12:6789
  pool: kube_test
  userId: kube
  userSecretName: ceph-user-secret
provisioner: ceph.com/rbd
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: Immediate
```
The secret exists:

```
kg secrets -n kube-system ceph-secret
NAME          TYPE                DATA   AGE
ceph-secret   kubernetes.io/rbd   1      227d
```
Debug:
Tried to create /etc/ceph/ceph.conf inside the rbd-provisioner container:

```
[root@rbd-provisioner-59c8fcb8fb-tlqz2 ceph]# ls
ceph.conf  keyring  rbdmap
[root@rbd-provisioner-59c8fcb8fb-tlqz2 ceph]# rbd showmapped
id  pool       image                                                         snap  device
0   kube_test  kubernetes-dynamic-pvc-ca1ffbc5-4f6c-11ea-a63a-e2155e00fea2   -     /dev/rbd0
[root@rbd-provisioner-59c8fcb8fb-tlqz2 ceph]# rbd info kubernetes-dynamic-pvc-ca1ffbc5-4f6c-11ea-a63a-e2155e00fea2 -p kube_test
rbd image 'kubernetes-dynamic-pvc-ca1ffbc5-4f6c-11ea-a63a-e2155e00fea2':
        size 100 GiB in 25600 objects
        order 22 (4 MiB objects)
        block_name_prefix: rb.0.5f9ad562.238e1f29
        format: 1
```
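Since a timeout usually means the cluster can't be reached, it may be worth checking TCP reachability to the monitors from the node where kube-controller-manager runs (the resize controller runs there, not in the provisioner pod). The address below is the first monitor from the StorageClass above:

```shell
# Bash-only reachability probe using the /dev/tcp pseudo-device.
# 1.2.3.4:6789 is the first monitor from the StorageClass; run this on
# the node hosting kube-controller-manager.
if timeout 5 bash -c 'cat < /dev/null > /dev/tcp/1.2.3.4/6789'; then
  echo "monitor reachable"
else
  echo "monitor unreachable (timed out or refused)"
fi
```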
I searched GitHub for similar issues, but unfortunately I couldn't find one with error 110.
Does anyone have any ideas?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Thanks for reporting the issue!
This repo is no longer being maintained and we are in the process of archiving this repo. Please see kubernetes/org#1563 for more details.
If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! 🙏