Unable to attach persistent volume claim to node (where it is attached already) after cluster update
haroldpadilla opened this issue · 0 comments
What did you do?
I updated my DO cluster to 1.16.2-do.2 and MongoDB couldn't reconnect to its PersistentVolumeClaim. I have autoscaling enabled on this cluster, and I think the problem happened while DO rescheduled resources and attached/detached volumes during the update; I noticed it created new nodes (I guess to reduce downtime) while it updated the other nodes.
What did you expect to happen?
I expected my cluster to keep working as it did before the update. MongoDB should be able to reattach to the PersistentVolumeClaim it had before the migration.
When I describe my mongodb pod I get this:
Normal Scheduled <unknown> default-scheduler Successfully assigned upp/test-update-mongodb-5f755dbdfc-vl2hb to mynode-ny-3-pool-h59o
Warning FailedAttachVolume 20m attachdetach-controller Multi-Attach error for volume "pvc-ad303050-47c1-421e-8de1-bb57ff930125" Volume is already exclusively attached to one node and can't be attached to another
Warning FailedMount 20m kubelet, mynode-ny-3-pool-h59o Unable to attach or mount volumes: unmounted volumes=[data custom-init-scripts], unattached volumes=[default-token-47drv data custom-init-scripts]: error processing PVC upp/test-update-mongodb: failed to fetch PVC from API server: persistentvolumeclaims "test-update-mongodb" is forbidden: User "system:node:mynode-ny-3-pool-h59o" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "upp": no relationship found between node "mynode-ny-3-pool-h59o" and this object
Normal SuccessfulAttachVolume 20m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ad303050-47c1-421e-8de1-bb57ff930125"
Normal Pulled 18m (x5 over 20m) kubelet, mynode-ny-3-pool-h59o Container image "docker.io/bitnami/mongodb:4.0.13-debian-9-r0" already present on machine
Normal Created 18m (x5 over 20m) kubelet, mynode-ny-3-pool-h59o Created container test-update-mongodb
Normal Started 18m (x5 over 20m) kubelet, mynode-ny-3-pool-h59o Started container test-update-mongodb
Warning BackOff 31s (x93 over 20m) kubelet, mynode-ny-3-pool-h59o Back-off restarting failed container
When I check where pvc-ad303050-47c1-421e-8de1-bb57ff930125 is attached, it is attached to node mynode-ny-3-pool-h59o, so I don't understand why the event says the volume can't be attached to that node when it's already attached to it.
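For reference, this is roughly how I checked the attachment (a sketch; it assumes kubectl access to the cluster and a configured doctl, and uses the volume/node names from the events above):

```shell
# List VolumeAttachment objects to see which node each CSI volume is attached to.
kubectl get volumeattachment

# Narrow it down to the PV backing the claim (name taken from the events above).
kubectl get volumeattachment \
  -o custom-columns=PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached \
  | grep pvc-ad303050-47c1-421e-8de1-bb57ff930125

# Cross-check from the DigitalOcean side.
doctl compute volume list
```

Both views showed the volume attached to mynode-ny-3-pool-h59o, the same node the pod was scheduled to.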
Configuration:
- CSI Version: digitalocean/do-csi-plugin:v1.1.2
- Kubernetes Version: 1.16.2-do.2
- Cloud provider/framework version, if applicable (such as Rancher): DO K8s