bbogush/nand_programmer

Potential issues with bad block handling

Opened this issue · 1 comment

I was testing a few chips I happened to have on hand while preparing to send in a PR, and I noticed that the current bad block handling strategy might not be the best fit here. Suppose I have a 128MB chip with 4 bad blocks: the read result does not correspond to the actual data on the flash, since I end up with 0x80000 bytes of 0x00 at the very end of the buffer (presumably the 4 skipped blocks × 0x20000 bytes per 128 KiB block).


And if I load a 128MB file, the write operation causes the program to error out too, because my file is "too large" once the 4 bad blocks are subtracted from the usable capacity.

Usually the "skip bad block" option in other tools I've seen simply advances the block counter and skips the actual r/w operation: bad blocks are filled with 0xFF in the buffer on read, and writing/erasing them is skipped entirely (the data destined for bad blocks is discarded). This way we can still get a consistent, whole 128MB file without having to manually record the offsets or mangle the binary with a hex editor to insert/remove blank regions. A rough sketch of that behaviour is below.
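For illustration, here is a minimal C sketch of that strategy. None of these names come from nand_programmer itself; the in-memory "chip", `nand_block_is_bad`, and the sizes are all hypothetical stand-ins, scaled down so the demo runs (a 128MB chip with 128 KiB blocks would be NUM_BLOCKS = 1024, BLOCK_SIZE = 0x20000):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS 8   /* toy size; 1024 for a real 128MB chip */
#define BLOCK_SIZE 16  /* toy size; 0x20000 for 128 KiB blocks */

/* Toy in-memory "chip" standing in for the real flash driver;
 * blocks 2 and 5 are marked bad for the demo. */
static uint8_t chip[NUM_BLOCKS][BLOCK_SIZE];
static bool bad[NUM_BLOCKS] = { false, false, true, false,
                                false, true,  false, false };

static bool nand_block_is_bad(uint32_t b) { return bad[b]; }

/* Read the full chip, preserving layout: bad blocks become 0xFF
 * filler, so the image is always NUM_BLOCKS * BLOCK_SIZE bytes. */
static void read_chip(uint8_t *buf)
{
    for (uint32_t b = 0; b < NUM_BLOCKS; b++) {
        uint8_t *dst = buf + (size_t)b * BLOCK_SIZE;
        if (nand_block_is_bad(b))
            memset(dst, 0xFF, BLOCK_SIZE); /* filler keeps offsets intact */
        else
            memcpy(dst, chip[b], BLOCK_SIZE);
    }
}

/* Write a full-size image back: data aimed at a bad block is simply
 * discarded, so a file matching the chip capacity never "overflows". */
static void write_chip(const uint8_t *buf)
{
    for (uint32_t b = 0; b < NUM_BLOCKS; b++) {
        if (nand_block_is_bad(b))
            continue; /* skip erase/program, drop this block's data */
        memcpy(chip[b], buf + (size_t)b * BLOCK_SIZE, BLOCK_SIZE);
    }
}

int main(void)
{
    uint8_t image[NUM_BLOCKS * BLOCK_SIZE];
    memset(image, 0xAB, sizeof image);
    write_chip(image); /* bad blocks silently skipped */

    uint8_t dump[NUM_BLOCKS * BLOCK_SIZE];
    read_chip(dump);
    printf("block 2 first byte: 0x%02X (0xFF filler, block is bad)\n",
           dump[2 * BLOCK_SIZE]);
    printf("block 3 first byte: 0x%02X (real data)\n",
           dump[3 * BLOCK_SIZE]);
    return 0;
}
```

The key property is that offsets in the dump always match offsets on the chip, so the image size stays constant no matter how many blocks are bad.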

Fixed. Skipped bad blocks are not included in the read buffer.