Running out of slabs
andbuitra opened this issue · 1 comment
A VDO volume over LVM was created with the default slab size (2G), but it grew bigger than expected. Now, when trying to extend the physical size, it complains about too many slabs:
Mar 18 14:23:59 backup2-co.conexcol.net kernel: kvdo0:dmsetup: mapToSystemError: mapping internal status code 2072 (kvdo: VDO_TOO_MANY_SLABS: kvdo: Exceeds maximum number of slabs supported) to EIO
Mar 18 14:23:59 backup2-co.conexcol.net kernel: device-mapper: table: 253:1: vdo: Device prepareToGrowPhysical failed (specified physical size too big based on formatted slab size)
Mar 18 14:23:59 backup2-co.conexcol.net kernel: device-mapper: ioctl: error adding target to table
Mar 18 14:23:59 backup2-co.conexcol.net vdo[1637]: ERROR - Device vdo-backup could not be changed; device-mapper: reload ioctl on vdo-backup failed: Input/output error
Mar 18 14:23:59 backup2-co.conexcol.net vdo[1637]: ERROR - device-mapper: reload ioctl on vdo-backup failed: Input/output error
Is it possible to change the slab size on the fly, or is that volume hopelessly stuck at its current size? The modify option has no argument for slabSize; only the create option does. Editing /etc/vdoconf.yml to change slabSize from 2G to 32G didn't work (I assume because all the slabs were already created at the 2G size).
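For context on why the grow fails: VDO supports at most 8192 slabs per volume, and the slab size is fixed when the volume is formatted, so the slab size chosen at create time caps the maximum physical size. A quick sanity check against the numbers in this report (the 8192-slab maximum comes from the VDO documentation; the slab count of 7166 is from the vdostats output below):

```shell
#!/bin/sh
# VDO addresses at most 8192 slabs per volume; slab size is fixed at format time.
max_slabs=8192
slab_size_gib=2                      # default slab size used on this volume
ceiling_gib=$((max_slabs * slab_size_gib))
echo "max physical size with ${slab_size_gib}G slabs: ${ceiling_gib} GiB"
# -> 16384 GiB, i.e. 16 TiB

current_slabs=7166                   # "slab count" reported by vdostats --verbose
echo "already allocated: $((current_slabs * slab_size_gib)) GiB of that ceiling"
# -> 14332 GiB, i.e. roughly 14 TiB
```

So with 2G slabs this volume tops out at 16 TiB of physical space, which is why growing the backing LV to 18 TiB triggers VDO_TOO_MANY_SLABS.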
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
└─sda1 8:1 0 40G 0 part /
sdb 8:16 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdc 8:32 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdd 8:48 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sde 8:64 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdf 8:80 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdg 8:96 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdh 8:112 0 1T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdi 8:128 0 1T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdj 8:144 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdk 8:160 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sr0 11:0 1 1024M 0 rom
vdostats --verbose
/dev/mapper/vdo-backup :
version : 31
release version : 133524
data blocks used : 3586432605
overhead blocks used : 9662628
logical blocks used : 4933717551
physical blocks : 3758088192
logical blocks : 5368709120
1K-blocks : 15032352768
1K-blocks used : 14384380932
1K-blocks available : 647971836
used percent : 95
saving percent : 27
block map cache size : 134217728
write policy : sync
block size : 4096
completed recovery count : 6
read-only recovery count : 0
operating mode : normal
recovery progress (%) : N/A
compressed fragments written : 504
compressed blocks written : 36
compressed fragments in packer : 8
slab count : 7166
slabs opened : 7166
slabs reopened : 1
journal disk full count : 0
journal commits requested count : 0
journal entries batching : 0
journal entries started : 2032
journal entries writing : 0
journal entries written : 2032
journal entries committed : 2032
journal blocks batching : 0
journal blocks started : 15
journal blocks writing : 0
journal blocks written : 15
journal blocks committed : 15
slab journal disk full count : 0
slab journal flush count : 0
slab journal blocked count : 0
slab journal blocks written : 1
slab journal tail busy count : 0
slab summary blocks written : 1
reference blocks written : 0
block map dirty pages : 2
block map clean pages : 32
block map free pages : 32734
block map failed pages : 0
block map incoming pages : 0
block map outgoing pages : 0
block map cache pressure : 0
block map read count : 1345
block map write count : 1016
block map failed reads : 0
block map failed writes : 0
block map reclaimed : 0
block map read outgoing : 0
block map found in cache : 2327
block map discard required : 0
block map wait for page : 0
block map fetch required : 34
block map pages loaded : 34
block map pages saved : 0
block map flush count : 0
dedupe advice valid : 0
dedupe advice stale : 0
concurrent data matches : 0
concurrent hash collisions : 0
invalid advice PBN count : 0
no space error count : 0
read only error count : 0
instance : 0
512 byte emulation : off
current VDO IO requests in progress : 8
maximum VDO IO requests in progress : 514
dedupe advice timeouts : 0
flush out : 0
write amplification ratio : 1.0
bios in read : 963
bios in write : 512
bios in discard : 0
bios in flush : 0
bios in fua : 0
bios in partial read : 0
bios in partial write : 0
bios in partial discard : 0
bios in partial flush : 0
bios in partial fua : 0
bios out read : 833
bios out write : 512
bios out discard : 0
bios out flush : 0
bios out fua : 0
bios meta read : 939191
bios meta write : 118
bios meta discard : 0
bios meta flush : 17
bios meta fua : 16
bios journal read : 0
bios journal write : 15
bios journal discard : 0
bios journal flush : 15
bios journal fua : 15
bios page cache read : 34
bios page cache write : 0
bios page cache discard : 0
bios page cache flush : 0
bios page cache fua : 0
bios out completed read : 833
bios out completed write : 512
bios out completed discard : 0
bios out completed flush : 0
bios out completed fua : 0
bios meta completed read : 939191
bios meta completed write : 118
bios meta completed discard : 0
bios meta completed flush : 0
bios meta completed fua : 0
bios journal completed read : 0
bios journal completed write : 15
bios journal completed discard : 0
bios journal completed flush : 0
bios journal completed fua : 0
bios page cache completed read : 34
bios page cache completed write : 0
bios page cache completed discard : 0
bios page cache completed flush : 0
bios page cache completed fua : 0
bios acknowledged read : 963
bios acknowledged write : 512
bios acknowledged discard : 0
bios acknowledged flush : 0
bios acknowledged fua : 0
bios acknowledged partial read : 0
bios acknowledged partial write : 0
bios acknowledged partial discard : 0
bios acknowledged partial flush : 0
bios acknowledged partial fua : 0
bios in progress read : 0
bios in progress write : 0
bios in progress discard : 0
bios in progress flush : 0
bios in progress fua : 0
KVDO module bytes used : 4435102384
KVDO module peak bytes used : 4444035840
entries indexed : 65471652
posts found : 0
posts not found : 0
queries found : 0
queries not found : 0
updates found : 0
updates not found : 0
current dedupe queries : 0
maximum dedupe queries : 0
Unfortunately, there is no way to change the slab size after a VDO volume has been created. To use a larger slab size (and thus allow a larger physical size), you'd need to create a new volume and transfer your data.
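A migration along those lines might look like the sketch below. This is only an outline: the device path (/dev/vg_backup/lv_new), the volume name (vdo-backup-new), and the mount point are placeholders, and the copy step depends on your filesystem and downtime tolerance.

```shell
# Sketch: rebuild with 32G slabs on new backing storage, then copy the data.
# All device and volume names here are hypothetical; adjust to your setup.
vdo create --name=vdo-backup-new \
    --device=/dev/vg_backup/lv_new \
    --vdoSlabSize=32G \
    --vdoLogicalSize=20T

# -K skips issuing discards at mkfs time, which is slow on a large VDO device.
mkfs.xfs -K /dev/mapper/vdo-backup-new
mount /dev/mapper/vdo-backup-new /mnt/backup-new

# Copy the data; rsync shown as one option.
rsync -aHAX /backup/ /mnt/backup-new/

# After verifying the copy, retire the old volume:
#   umount /backup && vdo remove --name=vdo-backup
```

With 32G slabs the same 8192-slab limit gives a physical ceiling of 256 TiB (8192 × 32 GiB), which leaves plenty of headroom for future growth of this volume.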