Decouple vertex semantics from representation
ColonelThirtyTwo opened this issue · 4 comments
OpenGL vertex attributes are defined in two parts: the representation of the data in the buffer and how to interpret that data. The wiki explains this. For example, you can store `u32`s in a vertex buffer and use `glVertexAttribPointer(n, 1, GL_UNSIGNED_INT, GL_FALSE, stride, offset)` to read them into an `in float` attribute.
Luminance, however, welds these concepts together: a type has a single interpretation, specified by how it implements `luminance::vertex::VertexAttrib`. This is inconvenient in some cases; for example, color data ought to be laid out in a vertex buffer as a four-component `u8` vector and specified with `glVertexAttribPointer(n, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, offset)` so that the shader can accept it as a normalized `vec4`, but this is impossible to do in luminance without implementing a custom `VertexAttrib` type.
Luminance ought to support specifying the interpretation of the data as part of deriving `Semantics`.
For reference, looks like luminance uses `glVertexAttribPointer` (`float`-`vec4`) for `f32`/`[f32; N]` (types with `VertexAttribType::Floating` or a `VertexAttribType` with `Normalized::No`), and `glVertexAttribIPointer` (`int`-`ivec4`) for everything else.
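
To make that distinction concrete, here is a rough sketch of the two raw OpenGL paths at the `gl`-crate level. This is only an illustration of the GL calls involved, not luminance's actual backend code; the attribute indices, `stride`, and `offset` are placeholders.

```rust
// Sketch only: assumes a current OpenGL context and a bound VAO/VBO,
// and the `gl` crate bindings.
use std::os::raw::c_void;

unsafe fn setup_attribs(stride: i32, offset: usize) {
    let ptr = offset as *const c_void;

    // Float path: u8 data in the buffer, normalized on read, so the shader
    // sees a float vec4 in [0, 1] (the color case from the issue).
    gl::VertexAttribPointer(0, 4, gl::UNSIGNED_BYTE, gl::TRUE, stride, ptr);
    gl::EnableVertexAttribArray(0);

    // Integer path: the shader reads the raw integers through an ivec4/uvec4.
    gl::VertexAttribIPointer(1, 4, gl::UNSIGNED_INT, stride, ptr);
    gl::EnableVertexAttribArray(1);
}
```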
The main goal of the semantics is to tie everything together so that a whole application can be type sound. Storing `u32`s and reading them as floats is already supported by luminance, and this is not tied to the semantics but to the vertex type. Example. So no, you are wrong when you say that it has a single interpretation.
The `normalized` attribute in that example is not mentioned in the documentation. Can you add it?
Yes, a (re)documentation pass is indeed required.