microsoft/DirectStorage

ZLib The DirectStorage request failed


When I compiled in release mode and ran with a 2 GB input file, I got the error below. What can cause this? Is there something I need to configure in Windows 11?

Compressing N:\Data\data.cow to N:\Data\data.cow.gdeflate in 162x16 MiB chunks
Total: 2716752886 --> 2716882936 bytes (100.005%)
Compressing N:\Data\data.cow to N:\Data\data.cow.zlib in 162x16 MiB chunks
Total: 2716752886 --> 2711941577 bytes (99.8229%)
Uncompressed:
16 MiB staging buffer: .......... 4.10728 GB/s mean cycle time: 370784527
32 MiB staging buffer: .......... 6.78455 GB/s mean cycle time: 308674715
64 MiB staging buffer: .......... 5.65416 GB/s mean cycle time: 339077114
128 MiB staging buffer: .......... 5.47467 GB/s mean cycle time: 363720296
256 MiB staging buffer: .......... 4.65176 GB/s mean cycle time: 405283062
512 MiB staging buffer: .......... 4.09985 GB/s mean cycle time: 448052001
1024 MiB staging buffer: .......... 4.08624 GB/s mean cycle time: 188527878
ZLib:
16 MiB staging buffer: The DirectStorage request failed! HRESULT=0x89240008
13052685 16782342

Q:\git_external\DirectStorage\Samples\GpuDecompressionBenchmark\x64\Release\GpuDecompressionBenchmark.exe (process 9040) exited with code -1073740791.
Press any key to close this window . . .

I think that this is a bug in the sample.

That error code is E_DSTORAGE_REQUEST_TOO_LARGE. This error happens when some part of the request (either the compressed or the uncompressed part) is larger than the staging buffer size. As the sample splits the source data into 16 MiB chunks, the uncompressed size must be 16 MiB, so my guess is that zlib compression has produced a chunk that is actually larger than the source data; note that the 16782342 in your output is just over 16 MiB (16,777,216 bytes), which is consistent with that. Compression can legitimately expand incompressible data, and the sample should really detect and handle this case.

In real life, the right thing to do here would probably be to fall back to storing that chunk uncompressed.
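
As a rough sketch of that fallback (not the sample's actual code; ChunkRecord and the per-chunk "compressed" flag are made up here for illustration), the compressor could emit the raw bytes whenever zlib expands a chunk:

```cpp
#include <zlib.h>
#include <cstdint>
#include <vector>

struct ChunkRecord
{
    bool compressed;            // hypothetical per-chunk flag recorded in the file
    std::vector<uint8_t> data;  // bytes actually written to disk
};

ChunkRecord CompressChunk(uint8_t const* src, size_t srcSize)
{
    std::vector<uint8_t> dest(compressBound(static_cast<uLong>(srcSize)));
    uLongf destSize = static_cast<uLongf>(dest.size());

    int result = compress2(dest.data(), &destSize, src,
                           static_cast<uLong>(srcSize), Z_DEFAULT_COMPRESSION);

    if (result != Z_OK || destSize >= srcSize)
    {
        // Compression failed or expanded the chunk: store it raw, so the
        // request can never be larger than the uncompressed chunk size.
        return { false, { src, src + srcSize } };
    }

    dest.resize(destSize);
    return { true, std::move(dest) };
}
```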

For the benchmark app I'm not so sure, since the whole point of it is to look at the performance of the codec. Possibly the right thing would be to just skip the 16 MiB staging buffer case when this happens.

If you want to try debugging this with your data set to see if I'm correct, have a look in GpuDecompressionBenchmark.cpp and search for queue->EnqueueRequest. There you could add a check to validate that both request.Source.File.Size and request.UncompressedSize are less than stagingSizeMiB * 1024 * 1024; if they're not, the request will fail and that particular test should be skipped.
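
For example, a guard along these lines (a sketch only; I'm assuming the request variable and the staging buffer size are both in scope at that point, and exactly how you bail out depends on how the surrounding loop is structured):

```cpp
// Hypothetical guard placed just before queue->EnqueueRequest(&request).
// The field names come from the DSTORAGE_REQUEST struct mentioned above.
uint64_t stagingSizeBytes = stagingSizeMiB * 1024ull * 1024ull;

if (request.Source.File.Size > stagingSizeBytes ||
    request.UncompressedSize > stagingSizeBytes)
{
    // DirectStorage would fail this request with
    // E_DSTORAGE_REQUEST_TOO_LARGE, so skip this staging buffer size.
    std::cout << "Chunk too large for " << stagingSizeMiB
              << " MiB staging buffer, skipping\n";
    return;
}

queue->EnqueueRequest(&request);
```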