Cannot persist large files to ext2 disk correctly
There is a bug that prevents writing large files to an ext2 disk correctly.
This can be demonstrated with the following commands:
dd if=/dev/random of=bigfile bs=1024 count=150000
sha256sum bigfile
sync
reboot
# After login:
sha256sum bigfile
Result: you will get either an I/O error or a different checksum.
The problem is in the calculation of a block index for a triple-indirect block in fs/ext2/inode.c, lines 326 to 333 in 6e036aa.
Here, tblock has not been adjusted to account for the number of blocks skipped by the last traversal.
I believe this is the appropriate code to adjust tblock before calculating the block index:
tindblock = (__blk_t *)buf3->data;
/* make tblock relative to the subtree selected by the previous
 * traversal step, so the index below stays within the block */
tblock -= BLOCKS_PER_DIND_BLOCK(i->sb) * block;
block = tindblock[tblock / BLOCKS_PER_IND_BLOCK(i->sb)];
Without this adjustment, tblock / BLOCKS_PER_IND_BLOCK(i->sb) will exceed the bounds (0..255) of tindblock, and the code will write into memory beyond the size of the disk block. The block numbers stored at these out-of-range indexes may be readable while they remain in memory, but they cannot survive a reboot because the on-disk block only holds 256 entries. So, after rebooting and reloading the blocks from disk, indexing beyond 255 produces an invalid block number.
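To make the out-of-range index concrete, here is a minimal standalone sketch of the arithmetic for the 150000-block test file created by the dd command above. The PER_IND/PER_DIND/NDIR names are simplified stand-ins I introduced, assuming 1 KiB blocks and 4-byte block numbers (256 entries per indirect block) and the usual 12 direct blocks in the inode:

#include <stdio.h>

/* Assumed geometry (simplified stand-ins, not the kernel macros):
 * 1 KiB blocks and 4-byte block numbers give 1024 / 4 = 256
 * entries per indirect block; ext2 inodes have 12 direct blocks. */
#define PER_IND  256                  /* entries per indirect block      */
#define PER_DIND (PER_IND * PER_IND)  /* blocks per double-indirect tree */
#define NDIR     12                   /* direct blocks in the inode      */

int main(void)
{
    /* Last logical block of the 150000-block file from the repro. */
    int lblock = 149999;

    /* Offset into the triple-indirect region after stripping the
     * direct, indirect, and double-indirect ranges. */
    int tblock = lblock - NDIR - PER_IND - PER_DIND;   /* 84195 */

    /* Slot chosen in the triple-indirect block by the earlier step. */
    int slot = tblock / PER_DIND;                      /* 1 */

    int buggy = tblock / PER_IND;                /* 328: outside 0..255 */
    int fixed = (tblock - slot * PER_DIND) / PER_IND;  /* 72: in range  */

    printf("tblock=%d slot=%d buggy=%d fixed=%d\n",
           tblock, slot, buggy, fixed);
    return 0;
}

With these assumptions, the unadjusted index is 328, well past the 256 entries a 1 KiB indirect block can hold, while the adjusted index is 72.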
I was unable to reproduce this bug with 256MB of memory, probably because the file was not created completely and the part of the file saved on disk was valid.
With 1024MB the bug appeared immediately.
Thank you.