Wrong order of matrix values in GLM?

I started using the GLM library for the math in an OpenGL 3 / GLSL application. I need an orthographic projection for drawing 2D graphics, so I wrote this simple code:

  glm::mat4 projection(1.0);
  projection = glm::ortho(0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 500.0f);

Printing the values created by glm::ortho on screen, I get:

   0.00313  0.00000  0.00000  0.00000
   0.00000 -0.00417  0.00000  0.00000
   0.00000  0.00000 -0.00200  0.00000
  -1.00000  1.00000 -1.00000  1.00000

As far as I know, this is not the correct order for the values in OpenGL, because multiplying this matrix by a position vector would ignore all the translation values.

I tested this matrix with my shader and some primitives and got a blank screen. But if I manually modify the matrix as follows, it works fine:

   0.00313  0.00000  0.00000 -1.00000
   0.00000 -0.00417  0.00000  1.00000
   0.00000  0.00000 -0.00200 -1.00000
   0.00000  0.00000  0.00000  1.00000

Also, looking at the ortho function in the file glm/gtc/matrix_transform.inl:

  template <typename valType>
  inline detail::tmat4x4<valType> ortho(
      valType const & left, valType const & right,
      valType const & bottom, valType const & top,
      valType const & zNear, valType const & zFar)
  {
      detail::tmat4x4<valType> Result(1);
      Result[0][0] = valType(2) / (right - left);
      Result[1][1] = valType(2) / (top - bottom);
      Result[2][2] = - valType(2) / (zFar - zNear);
      Result[3][0] = - (right + left) / (right - left);
      Result[3][1] = - (top + bottom) / (top - bottom);
      Result[3][2] = - (zFar + zNear) / (zFar - zNear);
      return Result;
  }

I replaced the last three assignment lines with the following code, and it also worked fine:

  Result[0][3] = - (right + left) / (right - left);
  Result[1][3] = - (top + bottom) / (top - bottom);
  Result[2][3] = - (zFar + zNear) / (zFar - zNear);

This is the minimal vertex shader I use for the test (note that at the moment uni_MVP is just the projection matrix described above):

  uniform mat4 uni_MVP;
  in vec2 in_Position;

  void main(void)
  {
      gl_Position = uni_MVP * vec4(in_Position.xy, 0.0, 1.0);
  }

I suppose this is not a bug, because all the functions work the same way. Maybe the problem is my C++ compiler, which inverts the order of multidimensional arrays? How can I solve this problem without changing the whole GLM source code?

I am using the latest GLM library (0.9.1) with Code::Blocks and MinGW on Windows Vista.

matrix opengl glsl glm-math
1 answer

First of all, it is called transposition, not inversion; inversion means something completely different. Second, that's how it should be: OpenGL expects matrices in column-major order, i.e. the matrix elements have the following indices:

   0  4  8 12
   1  5  9 13
   2  6 10 14
   3  7 11 15

A usual multidimensional C/C++ array, however, is laid out like this:

   0  1  2  3
   4  5  6  7
   8  9 10 11
  12 13 14 15

i.e. the row and column indices are transposed. Older versions of OpenGL had an extension that allowed you to supply matrices in transposed form, so that people did not have to rewrite their code: GL_ARB_transpose_matrix, http://www.opengl.org/registry/specs/ARB/transpose_matrix.txt

With shaders it is even easier. glUniformMatrix has a GLboolean transpose parameter, and you get three guesses what it does.

