lvconvert --repair -yf doesn't refresh logical volume
Version:
uname -r
6.1.77

lvm version
  LVM version:     2.03.11(2) (2021-01-08)
  Library version: 1.02.175 (2021-01-08)
  Driver version:  4.47.0
  Configuration:   ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-option-checking --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --with-udev-prefix=/ --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-dmeventd --enable-editline --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-udev_rules --enable-udev_sync --disable-readline --with-vdo=internal --with-writecache=internal

lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:    Ubuntu 20.04.6 LTS
  Release:        20.04
  Codename:       focal
Hello everyone. I have the following case: I created a volume group (named raid5) from three physical volumes and created two logical volumes in it:
lvcreate --type raid5 --nosync -i 2 -L 300G -I 64K -n vol1 raid5 -y
lvcreate --type raid5 --nosync -i 2 -L 300G -I 64K -n vol2 raid5 -y
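As a side note, which PVs back the individual raid images can be checked with lvs; the exact field list below is my own choice, not something taken from the original session:

lvs -a -o lv_name,segtype,devices raid5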
After that, I added another physical volume to the volume group and ran pvmove to move the data from the old physical volume onto the new one:
pvs
  PV                                           VG    Fmt  Attr PSize    PFree
  /dev/disk/by-id/scsi-35000cca04e27f588-part1       lvm2 ---  <372.61g <372.61g
  /dev/disk/by-id/scsi-35000cca04e27f5dc-part1 raid5 lvm2 a--  <372.61g  <72.60g
  /dev/disk/by-id/scsi-35000cca04e764154-part1 raid5 lvm2 a--  <372.61g  <72.60g
  /dev/disk/by-id/scsi-35001173101138874-part1 raid5 lvm2 a--  <372.61g  <72.60g

vgextend raid5 /dev/disk/by-id/scsi-35000cca04e27f588-part1
pvmove -b -i3 /dev/disk/by-id/scsi-35000cca04e27f5dc-part1 /dev/disk/by-id/scsi-35000cca04e27f588-part1
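For reference, whether the old PV ends up empty and the raid sub-LVs now sit on the new PV can be checked with standard pvs/lvs fields; this exact invocation is mine, not part of the captured output:

pvs -o pv_name,pv_size,pv_free,pv_used /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
lvs -a -o lv_name,devices raid5    # the rimage/rmeta sub-LVs should now reference the new PV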
Next, I rebooted the server and, after the reboot, executed vgchange -ay raid5. After the synchronization completed:
pvdisplay
  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               95387
  Allocated PE          0
  PV UUID               JXfmAb-sEsG-yAgO-5ebL-Ciqa-6Y2d-xR4ei5

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e764154-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               18585
  Allocated PE          76802
  PV UUID               K0Rq2g-RwgE-NJuy-FkSw-4fFP-FVYu-H90uvt

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35001173101138874-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               18585
  Allocated PE          76802
  PV UUID               6B49n3-OsFw-Dt4V-1XWR-sB29-4zpi-0noQAB

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e27f588-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               18585
  Allocated PE          76802
  PV UUID               Y5iLkE-bNTd-22Kq-6w3L-fBnF-8l31-huZmkU
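(The resync state itself can be watched with standard lvs fields; this is just the check I would run, not part of the original session:)

lvs -a -o lv_name,sync_percent,raid_sync_action,lv_health_status raid5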
Next, I removed the old physical volume from the volume group:

vgreduce raid5 /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
pvremove /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
Then I checked the state of the logical volumes:
lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
  {
      "report": [
          {
              "lv": [
                  {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":"refresh needed"},
                  {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
              ]
          }
      ]
  }
As you can see, raid5/vol1 now reports "lv_health_status":"refresh needed".
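(As an aside, my understanding is that a "refresh needed" state is normally cleared by refreshing or reactivating the LV; the commands below are what I would also expect to try here, though I have not verified them for this scenario.)

lvchange --refresh raid5/vol1
# or, if a plain refresh is not enough:
lvchange -an raid5/vol1 && lvchange -ay raid5/vol1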
Next, I ran lvconvert --repair -yf on raid5/vol1, but it didn't help:
lvconvert --repair -yf raid5/vol1
  Insufficient free space: 38401 extents needed, but only 0 available
  Failed to replace faulty devices in raid5/vol1.

lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
  {
      "report": [
          {
              "lv": [
                  {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":"refresh needed"},
                  {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
              ]
          }
      ]
  }
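The error indicates that --repair wants to allocate a full replacement image (38401 extents) but finds 0 extents available for it. One way to cross-check how much allocatable space the VG actually reports is via standard vgs/pvs fields; the exact commands here are my own, not from the original session:

vgs -o vg_name,vg_free,vg_free_count raid5
pvs -o pv_name,pv_free,pv_pe_count,pv_pe_alloc_count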
Moreover, I repeated the same scenario, but instead of executing vgreduce and pvremove I ran lvconvert --repair right away. The LV started repairing, and afterwards I saw that /dev/disk/by-id/scsi-35000cca04e27f5dc-part1 had only 56986 free PEs instead of 95387:
lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
  {
      "report": [
          {
              "lv": [
                  {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":"refresh needed"},
                  {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
              ]
          }
      ]
  }

lvconvert --repair -yf raid5/vol1
  Faulty devices in raid5/vol1 successfully replaced.

lvs --reportformat json -o full_name,lv_layout,vg_system_id,copy_percent,lv_health_status
  {
      "report": [
          {
              "lv": [
                  {"lv_full_name":"raid5/vol1", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"0.00", "lv_health_status":""},
                  {"lv_full_name":"raid5/vol2", "lv_layout":"raid,raid5,raid5_ls", "vg_systemid":"node1", "copy_percent":"100.00", "lv_health_status":""}
              ]
          }
      ]
  }

pvdisplay
  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e27f5dc-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               56986
  Allocated PE          38401
  PV UUID               TKzHB5-oG2R-h7Jy-DJm3-MCCb-bSXR-ScR00N

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e764154-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               18585
  Allocated PE          76802
  PV UUID               r300jQ-kvfq-JaYJ-cfcp-n1Y7-Lu5K-wcFxXd

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35001173101138874-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               56986
  Allocated PE          38401
  PV UUID               l7azMU-3nBo-tfEH-kxJz-Bd29-01WH-VvGC9N

  --- Physical volume ---
  PV Name               /dev/disk/by-id/scsi-35000cca04e27f588-part1
  VG Name               raid5
  PV Size               <372.61 GiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95387
  Free PE               18585
  Allocated PE          76802
  PV UUID               g2rKya-AjZG-i5Uh-DOfP-2jEJ-tT0E-JEi3Wv
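In this second run the replacement image was apparently allocated back onto the PV I was trying to retire (scsi-35000cca04e27f5dc-part1 now shows 38401 allocated extents). If the intent is to keep that PV out of the allocation, my guess would be to either list the allowed PVs explicitly on the repair command or move the extents off again afterwards; both commands below are standard, but applying them to this situation is an assumption on my part, not something I have verified:

# restrict allocation for the replacement image to a specific PV
lvconvert --repair -y raid5/vol1 /dev/disk/by-id/scsi-35000cca04e27f588-part1
# or move any extents off the old PV again before vgreduce/pvremove
pvmove /dev/disk/by-id/scsi-35000cca04e27f5dc-part1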
What could explain this behavior?