ArgumentException in Bc7Dds.Decode
RunDevelopment opened this issue · 4 comments
When opening a BC7 encoded file, I get the following error.
Exception thrown: "System.ArgumentException" in Pfim.dll: 'Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.'
at System.Buffer.BlockCopy(Array src, Int32 srcOffset, Array dst, Int32 dstOffset, Int32 count)
at Pfim.dds.Bc7Dds.Decode(Byte[] stream, Byte[] data, Int32 streamIndex, UInt32 dataIndex, UInt32 stride)
at Pfim.CompressedDds.DataDecode(Stream stream, PfimConfig config)
at Pfim.CompressedDds.Decode(Stream stream, PfimConfig config)
at Pfim.Dds.DecodeDds(Stream stream, PfimConfig config, DdsHeader header)
at Pfim.Pfim.FromFile(String path, PfimConfig config)
at DS3TexUpUI.DDSImage.Load(String file) in C:\Users\micha\Git\DS3TexUp\DS3TexUpUI\DDSConverter.cs: line 132
I suspect this happens because the input image (16384x8192) is quite large (137MB on disk, 683MB in memory assuming 4 bytes per pixel). However, that should still leave some headroom before an Int32 overflows, so maybe the size isn't the cause.
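For reference, here is my back-of-envelope check of those numbers (the 4 bytes per pixel and the 4/3 factor for a full mip chain are my own assumptions):

```csharp
using System;

long basePixels = 16384L * 8192;             // 134,217,728 pixels
long baseBytes  = basePixels * 4;            // 536,870,912 bytes (~512 MiB) for the top level alone
long withMips   = baseBytes * 4 / 3;         // ~716 million bytes (~683 MiB) with a full mip chain
Console.WriteLine(withMips < int.MaxValue);  // True: comfortably below Int32.MaxValue (2,147,483,647)
```

So even the fully decoded image with mipmaps should fit within an Int32 byte index.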
Unfortunately, I can't upload the image here because it's too large.
Does it happen with any DDS of that size? Let me know if there is a way to generate an image on my side that will trigger the exception.
Yes, any image of that size will do. To test this, I generated a random 16384x8192 image and saved it as BC7 SRGB compressed with mipmaps. When trying to open it, I got the same error.
Apologies for neglecting this issue. The root problem is that the amount of data needed to decode a row exceeds the buffer size. This can be worked around by increasing the buffer size:
new PfimConfig(bufferSize: 0x10000);
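For example, a minimal sketch of applying that workaround when loading the file (the path is a placeholder, and the Width/Height/Format properties assume the IImage returned by recent Pfim versions):

```csharp
using System;

// A 16384-wide BC7 image has 16384 / 4 = 4096 blocks per row at 16 bytes per block,
// i.e. 0x10000 bytes, so the read buffer needs to hold at least that much.
var config = new Pfim.PfimConfig(bufferSize: 0x10000);
var image = Pfim.Pfim.FromFile(@"C:\textures\large-texture.dds", config);
Console.WriteLine($"{image.Width}x{image.Height}, {image.Format}");
```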
A short-term solution may be to have pfim run buffer reads with max(config.bufferSize, bytesNeededToDecodeRow). But I can see how that may be less than desirable if one expects a constant-sized buffer to be pooled and instead keeps seeing different sizes from large images. A longer-term solution, though, should probably allow rows to be split across multiple reads.
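Roughly what I have in mind for the short-term idea, as an illustration only (these names are hypothetical, not actual Pfim internals):

```csharp
using System;
using System.Buffers;

// Illustrative sketch: pick a read size that always covers one full row of compressed blocks,
// falling back to the configured buffer size for smaller images.
static byte[] RentReadBuffer(int configuredBufferSize, int widthInPixels, int bytesPerBlock = 16)
{
    int widthInBlocks = (widthInPixels + 3) / 4;       // BC1-BC7 compress 4x4 pixel blocks
    int blockRowBytes = widthInBlocks * bytesPerBlock; // 0x10000 for a 16384-wide BC7 image
    return ArrayPool<byte>.Shared.Rent(Math.Max(configuredBufferSize, blockRowBytes));
}

byte[] buffer = RentReadBuffer(configuredBufferSize: 0x8000, widthInPixels: 16384);
Console.WriteLine(buffer.Length >= 0x10000); // True: the rented buffer can hold a full block row
```

The downside is that pooled buffer sizes then vary with image width, which is why splitting rows across multiple reads is the better long-term fix.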
Thank you @nickbabcock!