GPUOpen-Effects/TressFX

AMD_Types.h should use C99 stdint types

onitake opened this issue · 5 comments

AMD_Types.h contains a series of typedefs that map the built-in C integer types to supposedly fixed-size aliases, without taking different compilers, architectures, and OSes into account. Neither the C nor the C++ standard mandates a specific size for these built-in types.

Instead, the fixed-width types from the C99 header stdint.h should be used. In C++ code, they are also available in the std:: namespace by including cstdint.

This header has been available in VC++ since at least VS2012.
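For illustration, a minimal sketch of what such a mapping could look like. The alias names and the AMD namespace below are hypothetical and not necessarily what AMD_Types.h actually defines:

```cpp
// Hypothetical replacement for the typedefs in AMD_Types.h (names are illustrative only).
// The fixed-width types from <cstdint> (or <stdint.h> in C) guarantee the exact
// sizes the GPU-facing code expects, on every conforming compiler.
#include <cstdint>

namespace AMD
{
    typedef std::int8_t   sint8;    // exactly  8-bit signed
    typedef std::uint8_t  uint8;    // exactly  8-bit unsigned
    typedef std::int16_t  sint16;   // exactly 16-bit signed
    typedef std::uint16_t uint16;   // exactly 16-bit unsigned
    typedef std::int32_t  sint32;   // exactly 32-bit signed
    typedef std::uint32_t uint32;   // exactly 32-bit unsigned
    typedef std::int64_t  sint64;   // exactly 64-bit signed
    typedef std::uint64_t uint64;   // exactly 64-bit unsigned
}
```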

So what's the problem?

@didlie Did you actually read the bug report or do you just want to troll?

Presumably this is because they are being mapped to buffers on the GPU side, and therefore it is not OK for their size to vary. The hardware, and therefore the graphics library, will depend on a fixed size: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-scalar
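As a sketch of why the size matters (this is illustrative code, not taken from TressFX): a CPU-side struct uploaded to a GPU buffer has to match the shader's 32-bit scalars byte for byte, so any member type whose width depends on the compiler silently breaks the layout. The struct and member names here are hypothetical:

```cpp
// Sketch only: a hypothetical element layout shared between C++ and an HLSL
// StructuredBuffer. HLSL's int/uint/float are always 32 bits wide, so the
// CPU-side mirror must use fixed-width types too.
#include <cstdint>

struct HairVertexData
{
    std::int32_t  numVertices;  // HLSL int   -> exactly 32 bits
    std::uint32_t flags;        // HLSL uint  -> exactly 32 bits
    float         stiffness;    // HLSL float -> 32-bit IEEE 754 on the relevant platforms
};

// If "int" were 64 bits wide on some compiler, sizeof(HairVertexData) and every
// member offset would change, and the data copied into the GPU buffer would be wrong.
static_assert(sizeof(HairVertexData) == 12, "CPU-side layout must match the shader-side layout");
```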

@c6burns This is precisely why I'm asking that AMD_Types.h be modified so that its types match the exact types used on the GPU (or in shader code, for that matter). This is independent of the hardware or graphics API: HLSL, GLSL, DX11, DX12, OpenGL, and Vulkan all define exact size constraints for their types.

But no such guarantee exists in C or C++: the standard only requires int to be at least 16 bits wide, so it can legitimately be 16, 32, or 64 bits (or something more exotic) depending on the implementation. It is merely a convention, not a guarantee, that most compilers on common CPU architectures define it as a 32-bit signed integer.
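A small illustration of the difference (again just a sketch): the standard only guarantees a minimum range for int, whereas int32_t is exactly 32 bits on every platform that provides it:

```cpp
#include <cstdint>
#include <climits>

// int32_t is required to be exactly 32 bits wide with no padding wherever it
// exists, so this holds on every conforming implementation that defines it.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t is always exactly 32 bits");

// The equivalent check for plain int happens to pass on common desktop
// compilers, but nothing in the standard requires it -- which is exactly
// the portability problem:
// static_assert(sizeof(int) * CHAR_BIT == 32, "not guaranteed by the standard");
```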

My mistake, you are 100% correct. It should be mapping the types from stdint.h.