Leverage all alignment patterns to improve reliability on crumpled paper
davidnx opened this issue · 1 comment
The current implementation completely ignores all alignment patterns other than the bottom-right one, which causes significant unreliability when reading large (version 7+) QR codes printed on non-flat paper. Even small amounts of crumpling, sometimes unnoticeable to a human observer, are enough to prevent recognition.
Has there been any thought about how the library could leverage those? It seems like all the building blocks are in place; all we would have to do is break a QR code down into multiple PerspectiveTransforms, each computed from alignment patterns spread around the code, interpolating over the ones that cannot be found.
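To make the idea a bit more concrete, here is a rough sketch of the kind of sampling I have in mind. None of these names exist in the library; `modulePixel`, `anchors`, and `gridCoords` are made up for illustration, and plain bilinear interpolation stands in for a proper per-cell PerspectiveTransform just to keep the example self-contained:

```ts
interface Point { x: number; y: number; }

// anchors[r][c] is the image-space center of the alignment pattern (detected or
// interpolated) whose nominal module coordinates are (gridCoords[c], gridCoords[r]),
// e.g. gridCoords = [6, 22, 38] for a version 7 code.
function modulePixel(
  anchors: Point[][],
  gridCoords: number[],
  moduleX: number,
  moduleY: number,
): Point {
  // Pick the cell (pair of neighboring anchor coordinates) containing this module.
  const cellOf = (v: number): number => {
    let i = 0;
    while (i < gridCoords.length - 2 && v >= gridCoords[i + 1]) i++;
    return i;
  };
  const cx = cellOf(moduleX);
  const cy = cellOf(moduleY);

  // Normalized position of the module inside its cell. This can fall outside
  // [0, 1] near the finder patterns, which simply extrapolates from the nearest cell.
  const u = (moduleX - gridCoords[cx]) / (gridCoords[cx + 1] - gridCoords[cx]);
  const v = (moduleY - gridCoords[cy]) / (gridCoords[cy + 1] - gridCoords[cy]);

  // Image-space corners of the cell, taken from the anchor grid.
  const p00 = anchors[cy][cx];
  const p10 = anchors[cy][cx + 1];
  const p01 = anchors[cy + 1][cx];
  const p11 = anchors[cy + 1][cx + 1];

  // Bilinear blend of the four corners.
  const lerp = (a: Point, b: Point, t: number): Point => ({
    x: a.x + (b.x - a.x) * t,
    y: a.y + (b.y - a.y) * t,
  });
  return lerp(lerp(p00, p10, u), lerp(p01, p11, u), v);
}
```

Modules near the edges fall outside the alignment pattern grid, so they would either extrapolate from the nearest cell (as above) or we could add the finder pattern corners as extra anchors.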
Would be nice to hear what others who have spent more time with QR recognition algorithms might have in mind.
Yeah, I think what you have described is exactly the right approach. I think there may additionally be some nuance in figuring out how best to turn a "grid" of alignment patterns into perspective transforms, since some of them could be missing or obscured.
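One simple way to handle the missing/obscured ones might be to fill in each undetected grid entry from its detected neighbors before building the per-cell transforms, e.g. with a parallelogram rule. A quick sketch (again, `fillMissingAnchors` and the `Point` type are just illustrative, not existing API):

```ts
interface Point { x: number; y: number; }

// Fill undetected alignment pattern positions (null entries) by estimating each
// one from detected neighbors with the parallelogram rule
//   p[r][c] ~= p[r][c+dc] + p[r+dr][c] - p[r+dr][c+dc]
// averaged over whichever of the four orientations have all three neighbors known.
function fillMissingAnchors(grid: (Point | null)[][]): Point[][] {
  const rows = grid.length;
  const cols = grid[0].length;
  const result = grid.map(row => row.slice());

  // Each pass fills entries adjacent to known ones, so this terminates as long
  // as enough patterns were detected to seed the grid.
  let changed = true;
  while (changed) {
    changed = false;
    for (let r = 0; r < rows; r++) {
      for (let c = 0; c < cols; c++) {
        if (result[r][c]) continue;
        const estimates: Point[] = [];
        for (const [dr, dc] of [[1, 1], [1, -1], [-1, 1], [-1, -1]]) {
          const a = result[r]?.[c + dc];       // horizontal neighbor
          const b = result[r + dr]?.[c];       // vertical neighbor
          const d = result[r + dr]?.[c + dc];  // diagonal neighbor
          if (a && b && d) {
            estimates.push({ x: a.x + b.x - d.x, y: a.y + b.y - d.y });
          }
        }
        if (estimates.length > 0) {
          result[r][c] = {
            x: estimates.reduce((s, p) => s + p.x, 0) / estimates.length,
            y: estimates.reduce((s, p) => s + p.y, 0) / estimates.length,
          };
          changed = true;
        }
      }
    }
  }
  return result as Point[][];
}
```

In practice you would probably want to refine each estimated position by searching the image near the predicted location and only fall back to the estimate when that search fails.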
Very interested in accepting PRs and proposals for this work.