AMD_Types.h should use C99 stdint types #36

Open
onitake opened this issue Nov 14, 2018 · 5 comments

onitake commented Nov 14, 2018

AMD_Types.h contains a series of typedefs that map the basic C integer types to supposedly fixed-size ones, without taking different compilers, architectures and OSes into account. Neither the C nor the C++ standard mandates a specific size for these types.

Instead, the C99 standard types from stdint.h should be used. These are also available via the std:: namespace by including cstdint in C++ code.

This header has been available in VC++ since at least VS2012.
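
Roughly what I have in mind (a sketch only; the AMD_* typedef names below are illustrative, not the literal contents of AMD_Types.h):

```cpp
// Sketch only: the AMD_* typedef names are illustrative, not copied from AMD_Types.h.
// Today the header presumably guesses widths from the basic types, e.g.
//     typedef unsigned int AMD_UINT32;   // width not guaranteed by the standard
// Forwarding the exact-width C99 types removes the guesswork:
#include <stdint.h>   // in C++, <cstdint> provides the same types in namespace std

typedef int8_t    AMD_INT8;
typedef int16_t   AMD_INT16;
typedef int32_t   AMD_INT32;
typedef int64_t   AMD_INT64;
typedef uint8_t   AMD_UINT8;
typedef uint16_t  AMD_UINT16;
typedef uint32_t  AMD_UINT32;
typedef uint64_t  AMD_UINT64;
```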

didlie commented Nov 21, 2018

So what's the problem?

onitake commented Nov 21, 2018

@didlie Did you actually read the bug report or do you just want to troll?

c6burns commented Dec 17, 2018

Presumably this is because they are being mapped to buffers on the GPU side, and therefore it is not OK for their size to vary. The hardware, and therefore the graphics library, depends on a fixed size: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-scalar
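
For example (hypothetical constant buffer, just to illustrate the point): the CPU-side mirror of a GPU buffer has to use exact-width types, or the byte layout only matches by luck.

```cpp
#include <cstdint>

// HLSL side (hypothetical), where uint and float are exactly 32 bits each:
//   cbuffer Params { uint elementCount; float scale; };

// CPU-side mirror: exact-width types keep the layout in lockstep with the GPU.
struct Params
{
    std::uint32_t elementCount;  // matches HLSL uint, always 32 bits
    float         scale;         // 32-bit IEEE float on D3D-capable platforms
};

static_assert(sizeof(Params) == 8, "CPU/GPU layout mismatch");
```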

onitake commented Dec 17, 2018

@c6burns This is precisely why I'm asking for AMD_Types.h to be modified so that its types match the exact types on the GPU (or in the shader code, for that matter). This is independent of hardware or graphics API; HLSL, GLSL, DX11, DX12, OpenGL and Vulkan all define exact type constraints.

But that doesn't apply to C or C++: int can be anything from a 16-bit signed integer to a 64-bit signed integer (or an even more exotic type). It is only by convention that most compilers on most CPU architectures define it as a 32-bit signed integer.
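
A couple of compile-time checks make the point (the AMD_INT32 name in the last line is just a stand-in for whatever the header actually defines):

```cpp
#include <climits>
#include <cstdint>

// The standard only guarantees minimum ranges: int must cover at least
// -32767..32767, so it needs at least 16 bits, but it may have more.
static_assert(sizeof(int) * CHAR_BIT >= 16, "guaranteed by the standard");

// std::int32_t is exactly 32 bits wherever it is defined.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "exact by definition");

// If the existing typedef names must stay for source compatibility, a guard
// like this at least turns a silent width mismatch into a compile error
// (AMD_INT32 is a stand-in name):
//     static_assert(sizeof(AMD_INT32) == 4, "AMD_INT32 must be 32 bits wide");
```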

c6burns commented Dec 17, 2018

My mistake, you are 100% correct. It should be mapping the types from stdint.h.
