microsoft/DirectXTex

How is a 128-bit floating-point image stored?

WindowsNT opened this issue · 3 comments

I'm using DirectXTex to load an HDR image, and that part works: I get a DXGI_FORMAT_R32G32B32A32_FLOAT image that loads fine into my Direct2D HDR context (GUID_WICPixelFormat128bppPRGBAFloat).

My question is how to interpret these values in order to scale them so I can pass them to NVIDIA's H.265 10-bit HDR encoder:

```cpp
struct RGBX
{
    unsigned int b : 10;
    unsigned int g : 10;
    unsigned int r : 10;
    unsigned int a : 2; // unsigned: a 2-bit alpha field should hold 0..3
} rgbx;
```

I've also asked here: https://learn.microsoft.com/en-us/answers/questions/1059131/how-is-guid-wicpixelformat128bppprgbafloat-stored.html#comment-1059908.

Multiplying by 1024 doesn't work because the floats are not in the [0, 1] range. Testing various HDR images, I found that there is no fixed float range.

I suspect the mapping isn't linear, because no multiplication factor I've tried produces anything like a correct image in the encoder.

Good job, btw.

Converting arbitrary floating-point data down to a quantized 10-bit format is tone mapping. There are many possible operators; for a very basic one, see the texconv source code. A condensed sketch of that approach is below.
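For context, here is a condensed sketch of the two-pass approach texconv takes, using DirectXTex's EvaluateImage and TransformImage helpers; the real tool additionally handles mip chains, array slices, and more error cases:

```cpp
#include <DirectXTex.h>
#include <DirectXMath.h>

using namespace DirectX;

// Tonemap one FP32 image into the [0,1] range (Reinhard-style operator).
HRESULT Tonemap(const Image& srcImage, ScratchImage& result)
{
    // Pass 1: find the maximum luminance in the image.
    XMVECTOR maxLum = XMVectorZero();
    HRESULT hr = EvaluateImage(srcImage,
        [&](const XMVECTOR* pixels, size_t width, size_t /*y*/)
        {
            static const XMVECTORF32 s_luminance = { { { 0.3f, 0.59f, 0.11f, 0.f } } };
            for (size_t j = 0; j < width; ++j)
            {
                XMVECTOR v = XMVector3Dot(pixels[j], s_luminance);
                maxLum = XMVectorMax(v, maxLum);
            }
        });
    if (FAILED(hr))
        return hr;

    // Use the squared peak luminance as the operator's white point.
    maxLum = XMVectorMultiply(maxLum, maxLum);

    // Pass 2: scale each pixel so "0 to max luminance" maps to [0,1].
    return TransformImage(srcImage,
        [&](XMVECTOR* outPixels, const XMVECTOR* inPixels, size_t width, size_t /*y*/)
        {
            for (size_t j = 0; j < width; ++j)
            {
                XMVECTOR value = inPixels[j];
                XMVECTOR scale = XMVectorDivide(
                    XMVectorAdd(g_XMOne, XMVectorDivide(value, maxLum)),
                    XMVectorAdd(g_XMOne, value));
                // Tonemap RGB only; keep the original alpha (w component).
                outPixels[j] = XMVectorSelect(value,
                    XMVectorMultiply(value, scale), g_XMSelect1110);
            }
        },
        result);
}
```

The operator is the classic extended-Reinhard curve: values well below the peak luminance pass through almost unchanged, while the peak itself maps to 1.0.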

The texconv example tonemaps fine, but (I think) to 8-bit rather than 10-bit?

The tonemap operator converts the "0 to max luminance" range to "0 to 1". The DirectXTex library then converts from float [0, 1] to 8-bit UNORM, 10-bit UNORM, or any other DXGI_FORMAT. For 10:10:10:2 it uses DirectXMath's XMStoreUDecN4 function.
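For reference, a minimal sketch of that last packing step (XMUDECN4 and XMStoreUDecN4 come from the DirectXPackedVector.h header): the components are clamped to [0, 1], then RGB is scaled by 1023 and alpha by 3.

```cpp
#include <DirectXMath.h>
#include <DirectXPackedVector.h>

using namespace DirectX;
using namespace DirectX::PackedVector;

// Pack a tonemapped [0,1] color into 10:10:10:2 UNORM.
// XMStoreUDecN4 clamps each component to [0,1], then scales
// x/y/z (RGB) by 1023 and w (alpha) by 3 before packing.
XMUDECN4 PackR10G10B10A2(float r, float g, float b, float a)
{
    XMUDECN4 packed;
    XMStoreUDecN4(&packed, XMVectorSet(r, g, b, a));
    return packed; // packed.v: r in bits 0-9, g in 10-19, b in 20-29, a in 30-31
}
```

Note that XMUDECN4 puts red in the low 10 bits, whereas the struct in the question puts blue there, so a red/blue swizzle may be needed depending on the layout the encoder actually expects.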