LV disappears after reboot
We use https://github.com/openebs/lvm-localpv to provision persistent volumes for Kubernetes pods.
After a reboot, we found that one LV had disappeared while the other LVs are normal.
The lost LV couldn't be found in /dev/mapper or /dev/lvmvg. There isn't any clue in the openebs pod logs or the lvm service logs.
One weird thing we found is that there's no record of pvc-070f0023-d09e-4041-9cb9-dbd1a579e717 in /etc/lvm/archive.
But the LV was definitely created successfully, and the pod worked well before the restart.
pvscan --cache returns no error.
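
For anyone debugging something similar, a few commands that may help trace where the LV went (a sketch only - this assumes the VG is named lvmvg, as the /dev/lvmvg path suggests; adjust names to your setup):

vgcfgrestore --list lvmvg   # list the metadata archives LVM wrote before each VG change
grep -rl pvc-070f0023-d09e-4041-9cb9-dbd1a579e717 /etc/lvm/archive /etc/lvm/backup   # look for archived metadata mentioning the lost LV
lvs -a lvmvg   # show every LV the VG currently knows about, including hidden/internal ones

If the grep turns up an archive file containing the LV, vgcfgrestore can restore the VG metadata from it; if it finds nothing, no metadata for that LV was ever archived on this node.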
root@master1:/etc/lvm/archive# lvm version
LVM version: 2.02.176(2) (2017-11-03)
Library version: 1.02.145 (2017-11-03)
Driver version: 4.41.0
Configuration: ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --exec-prefix= --bindir=/bin --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-clvmd=corosync --with-cluster=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-cmirrord --enable-dmeventd --enable-dbus-service --enable-lvmetad --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-readline --enable-udev_rules --enable-udev_sync
Could anyone take a look at this serious issue?
Thanks in advance!
Sorry - but we have absolutely no idea what the Kubernetes project is doing - so we can only help if you have a report from some particular 'lvm2' command - preferably with a -vvvv log - and ideally opened as a Bugzilla: https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper
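
For example, a verbose trace from the affected node could be captured with something like the following (the debug output goes to stderr):

lvs -a -vvvv 2> lvs-debug.log
pvscan --cache -vvvv 2> pvscan-debug.log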
Also note - you are trying to work with a project release dated 2017 - we are already halfway through 2022 - so please consider trying a recent release first.
Final note - lvm2 is always meant to be used on the host machine - never ever from containers - as that is a category of trouble all its own....