Run the following experiment and you will see the difference.
Suppose you are performing transform feedback with the following vertex shader:
#version 450 core

layout(location = 0) in int inValue;
layout(xfb_offset = 0) out float outValue;

void main()
{
    outValue = sqrt(inValue);
}
And this is your "vertex data":
GLint data[] = { 1, 2, 3, 4, 5 };
Then, if you configure vertex attributes like this:
glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr);
You will get wrong and weird-looking results.
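Why weird? glVertexAttribPointer() tells OpenGL to convert the attribute data to floating point, so the integer 1 is handed to the pipeline as the float 1.0f, and the shader's int input then sees the raw bit pattern of that float (the spec leaves the type mismatch undefined, but this is what typical drivers do). The following standalone C++ sketch, with no OpenGL involved and variable names of my own choosing, reproduces the arithmetic the shader ends up doing:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // glVertexAttribPointer(..., GL_INT, GL_FALSE, ...) asks OpenGL to convert
    // the attribute to floating point: the integer 1 becomes the float 1.0f.
    float converted = 1.0f;

    // The shader input is declared "in int", so on typical drivers it sees the
    // bit pattern of that float reinterpreted as a 32-bit integer.
    std::int32_t seenByShader = 0;
    std::memcpy(&seenByShader, &converted, sizeof seenByShader);

    // sqrt(inValue) then runs on this huge number instead of on 1.
    std::printf("bit pattern as int: %d\n", seenByShader);                    // 1065353216
    std::printf("sqrt of that:       %f\n", std::sqrt(double(seenByShader))); // ~32639.7
}

That is exactly the kind of garbage value that shows up in the captured buffer.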
If you change this line in the vertex shader
outValue = sqrt(inValue);
to
outValue = sqrt(intBitsToFloat(inValue));
or
change this line in the C++ code:
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, nullptr);
                            ^^^^^^^^ does not match the real input type,
                                     but it stops glVertexAttribPointer() from converting the values
it will work. But neither of these is a natural way to do it.
This is where glVertexAttribIPointer() comes in:
--- glVertexAttribPointer(0, 1, GL_INT, GL_FALSE, 0, nullptr);
+++ glVertexAttribIPointer(0, 1, GL_INT, 0, nullptr);
Then you will get the right results.
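For completeness, here is a rough sketch of the host-side code I am assuming for this experiment. A GL 4.5 context is assumed to be current, error checking is omitted, and compileVertexOnlyProgram() / vertexShaderSource are hypothetical stand-ins for compiling and linking the vertex shader above (thanks to xfb_offset, no glTransformFeedbackVaryings() call is needed):

// Sketch only: compileVertexOnlyProgram() and vertexShaderSource are assumed helpers.
GLuint program = compileVertexOnlyProgram(vertexShaderSource);
glUseProgram(program);

GLint data[] = { 1, 2, 3, 4, 5 };

// Input buffer holding the integer "vertex data".
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

// The key line: the I-variant hands the integers to the shader untouched.
glVertexAttribIPointer(0, 1, GL_INT, 0, nullptr);
glEnableVertexAttribArray(0);

// Output buffer that captures the float results.
GLuint xfb;
glGenBuffers(1, &xfb);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, xfb);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, sizeof(float) * 5, nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, xfb);

// Run the vertex shader once per input value; no rasterization is needed.
glEnable(GL_RASTERIZER_DISCARD);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, 5);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read back the captured values: expect roughly 1.000, 1.414, 1.732, 2.000, 2.236.
float results[5];
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof(results), results);

Note that glVertexAttribIPointer() has no normalized parameter at all: the values are passed through to the shader as integers, so there is no conversion to configure in the first place.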
(I fought with this all day until I found glVertexAttribIPointer().)