Garbled results when not copying whole rows out of ProcessingPipeline.PixelSource
rickbrew opened this issue · 3 comments
I'm currently using my own custom "super sampling" renderer (which we've discussed on Discord) for my app's Image->Resize command. This renderer only works when down-sizing, so I make use of MagicScaler's Cubic mode if one or both axes need to be lengthened. The lengthening happens first, then the reduction. So MagicScaler is the "inner" pixel source, and my SuperSampler is the "outer" one.
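Roughly, the composition looks like this. This is just a sketch: `SuperSampler` is a stand-in for my renderer, the sizes are made up for the example, and I'm assuming the public `BuildPipeline` / `IPixelSource` API shapes here:

```csharp
using PhotoSauce.MagicScaler;

// Example: 1000x500 -> 800x800. Height must be lengthened (500 -> 800),
// width must be reduced (1000 -> 800), so both stages are involved.

// Stage 1 ("inner"): MagicScaler lengthens the height with Cubic,
// leaving the width alone for now.
var settings = new ProcessImageSettings
{
    Width = 1000,
    Height = 800,
    ResizeMode = CropScaleMode.Stretch,
    Interpolation = InterpolationSettings.Cubic
};

using var pipeline = MagicImageProcessor.BuildPipeline(@"input.png", settings);

// Stage 2 ("outer"): the super-sampling renderer reduces the width,
// pulling pixels from the pipeline's output. SuperSampler is a stand-in
// for my custom renderer, not a MagicScaler type.
var resampler = new SuperSampler(pipeline.PixelSource, width: 800, height: 800);
```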
If I resize an image to be taller but narrower, or shorter but wider, I get garbled results.
When pulling pixels out of the SS resampler, I request full-width rectangles (currently 32px tall). It then sub-divides those rectangles in order to reduce memory usage when it needs a temporary buffer for copying pixels from the "inner" source (either the original image, or in this case ProcessingPipeline.PixelSource, because one axis needs to be lengthened first). For the example above, sub-division had to happen twice, so we end up with rectangles that are 25% of the full width. The left-most 25% of the image turns out fine; the rest gets garbled, and the nature of the corruption changes at the 50% mark.
If I invert the sources, that is, MagicScaler is the "outer" source and my SS is the "inner" source, it works fine.
If I only subdivide the rectangles by height, so that I'm always requesting full-width rects from MagicScaler, everything works fine.
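For reference, the access pattern that works is plain top-to-bottom, full-width bands. A minimal sketch against `IPixelSource` (the 32bpp stride assumption and the default band size are mine):

```csharp
using System;
using System.Drawing;
using PhotoSauce.MagicScaler;

static class BandCopier
{
    public static void CopyByFullWidthBands(IPixelSource source, int bandHeight = 32)
    {
        const int bytesPerPixel = 4; // assumes a 32bpp format such as BGRA
        int stride = source.Width * bytesPerPixel;
        byte[] buffer = new byte[stride * bandHeight];

        // Walk top to bottom, always requesting full-width rectangles so the
        // inner source never sees a horizontal backtrack.
        for (int y = 0; y < source.Height; y += bandHeight)
        {
            int h = Math.Min(bandHeight, source.Height - y);
            var band = new Rectangle(0, y, source.Width, h);
            source.CopyPixels(band, stride, buffer.AsSpan(0, stride * h));
            // ...hand the band off to the down-sampler here...
        }
    }
}
```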
Here's the test image I'm working with (copyright me, public domain).
I'm changing my code to only sub-divide by height, not width, as per our discussion on Discord. It's the right thing to do whether I'm using MagicScaler or anything else, but I still wanted to point this out as a correctness issue in MagicScaler.
That's a good one. The convolution processor's intermediate buffer uses a sliding window, where sliding the window forward allows it to overwrite the beginning of otherwise valid buffered values. The buffer is failing to invalidate the rows that have been partially overwritten, allowing garbage to be retrieved from them.
The fix is to make sure it invalidates the rows that have been partially overwritten, but of course that means performance will be affected since that part of the buffer has to be reconstructed if you backtrack. So yeah, as long as you don't backtrack between rows that are still live in the buffer you won't hit the invalid data or take the perf hit. But I'll get a fix in regardless for correctness.
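For anyone reading along, here's one way to picture the failure mode. This is a deliberately simplified sketch of a generic sliding row cache, not MagicScaler's actual code; the names and layout are invented for illustration:

```csharp
using System;

// Simplified sketch: a flat buffer caches a contiguous range of source rows.
// Slots are reused modulo capacity, so storing a new row overwrites the slot
// that held `row - capacity`. If the valid range isn't trimmed when that
// happens, a caller that backtracks can read rows whose storage was
// overwritten by later rows.
sealed class SlidingRowBuffer
{
    private readonly byte[] buffer;
    private readonly int rowBytes, capacity;
    private int firstValid = -1, lastValid = -1; // inclusive source-row range

    public SlidingRowBuffer(int rowBytes, int capacity)
    {
        this.rowBytes = rowBytes;
        this.capacity = capacity;
        buffer = new byte[rowBytes * capacity];
    }

    private Span<byte> Slot(int row) =>
        buffer.AsSpan(row % capacity * rowBytes, rowBytes);

    // Rows are assumed to arrive in increasing order.
    public void Store(int row, ReadOnlySpan<byte> pixels)
    {
        pixels.CopyTo(Slot(row));
        if (firstValid < 0) firstValid = row;
        lastValid = row;

        // The fix: once the window wraps, the oldest rows' storage has been
        // overwritten, so they must drop out of the valid range. Skipping
        // this step is what lets stale data be served.
        if (lastValid - firstValid + 1 > capacity)
            firstValid = lastValid - capacity + 1;
    }

    public bool TryGetRow(int row, Span<byte> dest)
    {
        if (row < firstValid || row > lastValid)
            return false; // caller must re-render the row from upstream

        Slot(row).CopyTo(dest);
        return true;
    }
}
```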
fixed in 283584c