Why are there different options for glVertexAttribPointer?

There are three of them:

glVertexAttribPointer()
glVertexAttribIPointer()
glVertexAttribLPointer()

As far as I know, glVertexAttribPointer() could be used in place of the other two.

If so, why do the I and L variants exist?

3 answers

I read about it in OpenGL Insights

With glVertexAttribPointer(), everything is exposed to the shader as floats. glVertexAttribIPointer() can only expose vertex arrays that store integers, and glVertexAttribLPointer() only doubles.

As the quote from this OpenGL.org page confirms:

For glVertexAttribPointer, if normalized is set to GL_TRUE, it indicates that values stored in an integer format are to be mapped to the range [-1,1] (for signed values) or [0,1] (for unsigned values) when they are accessed and converted to floating point. Otherwise, values will be converted to floats directly without normalization.

For glVertexAttribIPointer, only the integer types GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, and GL_UNSIGNED_INT are accepted. Values always remain integer values.

glVertexAttribLPointer specifies the state of a generic vertex attribute array associated with a shader attribute variable declared with 64-bit double-precision components. type must be GL_DOUBLE. index, size, and stride behave as described for glVertexAttribPointer and glVertexAttribIPointer.
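
Putting the three descriptions together, here is a minimal sketch of how each function pairs with a shader-side declaration (the attribute locations, component counts, and the assumption that a VBO with matching data is already bound are mine, not from the quotes):

    // Exposed as floats: layout(location = 0) in vec3 position;
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Kept as integers: layout(location = 1) in ivec2 cell;
    // (note: no "normalized" parameter, it would be meaningless here)
    glVertexAttribIPointer(1, 2, GL_INT, 0, nullptr);

    // Kept as 64-bit doubles: layout(location = 2) in dvec3 precisePosition;
    glVertexAttribLPointer(2, 3, GL_DOUBLE, 0, nullptr);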


No, they cannot be used instead of each other.

Traditionally, all GL vertex attributes were floating point. The fact that you can feed in integer data does not change that, because the data is converted to floating point on the fly. The normalized parameter controls how that conversion is done: if it is enabled, the range of the input type is mapped to the normalized range [0,1] (for unsigned types, also called UNORM in GL) or [-1,1] (for signed types, also called SNORM); if it is disabled, each value is converted directly to the floating-point value nearest to the input integer.
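
As a concrete illustration (the numbers are mine, not from the answer), consider a buffer of GLubyte color components such as { 255, 128, 0, 255 } read by the shader as in vec4 color;:

    // normalized = GL_TRUE (UNORM): each byte is mapped to [0,1] as value / 255.0,
    // so the shader sees approximately (1.0, 0.502, 0.0, 1.0)
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);

    // normalized = GL_FALSE: each byte is converted directly,
    // so the shader sees (255.0, 128.0, 0.0, 255.0)
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, nullptr);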

Since this was the original API, it had to be extended when genuinely different attribute data types (integers and doubles) were introduced. Note also that the attribute pointers are independent of the shaders, so the destination type cannot be deduced from the currently bound shader (if any), since the same attribute setup might later be used with different shaders. Hence the L variant for double/dvec attributes and the I variant for int/uint/ivec/uvec attributes.


Do the following test and you will see the difference.

Suppose you are performing transform feedback with the following vertex shader:

    #version 450 core

    // "input" and "output" are reserved words in GLSL, so different names are used here
    layout(location = 0) in int inValue;
    layout(xfb_offset = 0) out float outValue;

    void main()
    {
        outValue = sqrt(inValue); // the int is implicitly converted to float
    }

And this is your "vertex data":

 GLint data[] = { 1, 2, 3, 4, 5 }; 

Then, if you configure vertex attributes like this:

 glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr); 

You will get wrong and weird results, because glVertexAttribPointer() converts each integer to a float, and the shader then reinterprets that float's bit pattern as an int (for example, 1 becomes 1.0f, which reads back as 1065353216).


If you change this line in the vertex shader

    outValue = sqrt(inValue);

to

    outValue = sqrt(intBitsToFloat(inValue));

or

change this line in the C++ code:

    glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, nullptr);
    //                       ^^^^^^^^ does not match the real input type,
    //                       but stops glVertexAttribPointer() from converting the values

This will work, but it is not the natural way to do it.


Now glVertexAttribIPointer() comes into play:

    --- glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr);
    +++ glVertexAttribIPointer(0, 1, GL_INT, 0, nullptr);

Then you will get the right results.
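
For completeness, here is a rough sketch of the transform-feedback plumbing this test assumes (the buffer and program names are placeholders; the vertex buffer and attribute pointer from above are assumed to be bound in a VAO, program is the linked shader, and the xfb_offset qualifier takes care of declaring the captured varying):

    // Buffer to receive the captured outValue for the 5 vertices
    GLuint xfb;
    glGenBuffers(1, &xfb);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, xfb);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, 5 * sizeof(GLfloat), nullptr, GL_STATIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, xfb);

    glEnable(GL_RASTERIZER_DISCARD);      // we only care about the captured values
    glUseProgram(program);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, 5);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    GLfloat results[5];
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof(results), results);
    // With glVertexAttribIPointer: roughly { 1.0, 1.414, 1.732, 2.0, 2.236 }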

(I struggled with this for a whole day until I found glVertexAttribIPointer().)
