Expanded LVM Not Recognized
Describe the bug
For more flexibility, I’m using a volume group that sits on top of a large hardware RAID volume. On top of that, I created a logical volume (LV) to serve as a disk for MinIO DirectPV in my Kubernetes cluster. The primary reason for this setup is to split the large hardware RAID volume, which Linux sees as a single disk, into multiple smaller disks, while keeping the option to expand an LV when needed. Partitions would offer better performance, but they are not as straightforward to expand. Currently, we don’t know how large our volumes will need to be, especially for DirectPV.
However, I’ve encountered a “bug” when expanding the LV: after extending the LV and then growing the XFS filesystem with xfs_growfs, the drive is expanded correctly at the OS level. But when I run kubectl directpv list drives, DirectPV doesn’t recognize the expansion. I’ve attempted to suspend, repair, and resume the drive, but nothing resolves the issue. The only effective workaround seems to be rebooting the node, after which DirectPV sees that the disk has become bigger.
To Reproduce
Use an LVM logical volume as a DirectPV disk, expand it, and grow the XFS filesystem, as sketched below. Then run kubectl directpv list drives and you will see that DirectPV doesn’t notice that the disk has grown.
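A minimal sketch of the steps, assuming a volume group vg0 with a logical volume directpv-lv already serving as a DirectPV drive (the VG/LV names and the mount point are illustrative):

```sh
# Grow the logical volume by 100 GiB (VG/LV names are hypothetical)
sudo lvextend -L +100G /dev/vg0/directpv-lv

# Grow the XFS filesystem to fill the LV (mount point is hypothetical)
sudo xfs_growfs /mnt/directpv-drive

# DirectPV still reports the old capacity
kubectl directpv list drives
```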
Expected behavior
Some way to make DirectPV notice that the disk has expanded, ideally without having to suspend the drives (no downtime).
Deployment information (please complete the following information):
- DirectPV version: v4.0.12
- Kubernetes Version: 1.30.2
- OS info: Red Hat Enterprise Linux 9.4
- Kernel version: 5.14.0-427.31.1.el9_4
Additional context
You could argue that DirectPV is mostly designed for directly attached disks, for performance reasons. However, in some cases, when you have a server with many SAS SSDs, your buses won’t be able to handle that much disk traffic. With a hardware RAID controller, you can still achieve very high disk performance. Of course, partitions would provide the best performance in that case, but if you want a bit more flexibility, LVM still offers good enough performance for high-performance use cases.
This means udev on your system doesn't reflect the resized LVM volume. It could be a udev bug. Run sudo udevadm control --reload-rules && sudo udevadm trigger to refresh the udev data on the fly on a buggy system.
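For concreteness, a hedged sketch of re-triggering just one device and inspecting what udev reports afterwards (the LV path is illustrative):

```sh
# Ask udev to re-process the device backing the LV (path is hypothetical)
sudo udevadm trigger --action=change /dev/vg0/directpv-lv

# Dump the udev database entry for that device and check the size-related fields
udevadm info /dev/vg0/directpv-lv
```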
Tried it, but unfortunately, this doesn’t fix the problem.
You would need to report this to your OS vendor. Nothing can be done on the DirectPV side.
So, if udevadm behaves correctly, kubectl directpv list drives will show the correct size of the disk?
Do you have some more information about what udevadm is doing wrong, where DirectPV gets this info from, or how this works, so I can create an issue for Red Hat as the OS vendor?
Also, is there an OS like Debian 12, Ubuntu 24, etc. where you know this will work correctly? I will test it out then.
DirectPV reads data from the /run/udev/data/ directory, which is maintained by udev. WRT the problem, you don't need anything from DirectPV: just expand the LVM volume and check whether the respective /run/udev/data/b<major>:<minor> file has the expanded size. You could report this to your OS vendor.
I am not sure about Debian/Ubuntu LVM/udev behavior.
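To make that check concrete, a hedged sketch assuming the LV from before (the LV path and the device-mapper major:minor numbers are illustrative):

```sh
# Find the device's major:minor pair (LV path is hypothetical)
lsblk -o NAME,MAJ:MIN,SIZE /dev/vg0/directpv-lv

# Inspect the udev database entry for that pair, e.g. 253:2
cat /run/udev/data/b253:2

# Compare against the kernel's view of the size, in 512-byte sectors
# (dm-2 is the hypothetical DM node backing the LV)
cat /sys/class/block/dm-2/size
```

If the udev entry still shows the old size after the resize while sysfs shows the new one, that supports the udev-side explanation above.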
Thanks for the information about udev and for the help.