kubernetes-retired/external-storage

NFS provisioner triggers I/O errors on my hard drive

mamiapatrick opened this issue · 1 comments

Hello,

I use nfs-provisioner on my RKE/K8s cluster, and when I deploy it on my VMs the nodes start logging many errors with this message:

blk_update_request: I/O error, dev xvda (or xvdb), sector xxxxxx

The problem is so critical that after some hours many nodes simply crashed, and I was obliged to remove the pod. Has anyone experienced this, and how could I fix it? I need to set up OpenEBS ReadWriteMany (RWX) volumes.

This was an issue with my storage pool and had nothing to do with NFS. With thick provisioning, the virtual disk size did not match the physical allocation, so the physical disk was actually full even though the virtual drive was not.
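The failure mode described above can be illustrated with a minimal sketch: when the virtual sizes handed out to guests exceed the physical capacity backing the pool, writes start failing with I/O errors once the physical space is exhausted, even though each guest still believes it has free space. The sizes below are hypothetical placeholders, not values from this cluster.

```shell
#!/bin/sh
# Hypothetical sizes in GiB -- substitute your own pool and disk figures.
POOL_PHYSICAL=100   # physical capacity actually backing the storage pool
VDISK_TOTAL=160     # sum of the virtual disk sizes carved out of that pool

# If the summed virtual sizes exceed physical capacity, the pool is
# overcommitted: guests can fill it and then see I/O errors like
# "blk_update_request: I/O error" while their own filesystems look non-full.
if [ "$VDISK_TOTAL" -gt "$POOL_PHYSICAL" ]; then
  echo "overcommitted by $((VDISK_TOTAL - POOL_PHYSICAL)) GiB"
else
  echo "pool capacity covers all virtual disks"
fi
```

In practice the check has to be made at the hypervisor or storage-pool layer (e.g. the datastore or volume-group usage view of your virtualization platform), since `df` inside the guest only reports the virtual filesystem and will not reveal that the underlying pool is full.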