I started using the GLM math library to perform mathematical operations with OpenGL 3 and GLSL. I need an orthographic projection for drawing 2D graphics, so I wrote this simple code:
glm::mat4 projection(1.0);
projection = glm::ortho(0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 500.0f);
Printing the values created by glm::ortho on screen, I get:
 0.00313  0.00000  0.00000  0.00000
 0.00000 -0.00417  0.00000  0.00000
 0.00000  0.00000 -0.00200  0.00000
-1.00000  1.00000 -1.00000  1.00000
As far as I know, this is not the correct order of values for OpenGL, because multiplying this matrix by a position vector would ignore all the translation values.
I tested this matrix with my shader and some primitives and got a blank screen. But if I manually modify the matrix as follows, it works fine:
 0.00313  0.00000  0.00000 -1.00000
 0.00000 -0.00417  0.00000  1.00000
 0.00000  0.00000 -0.00200 -1.00000
 0.00000  0.00000  0.00000  1.00000
Also, looking at the ortho function in the file glm/gtc/matrix_transform.inl:
template <typename valType>
inline detail::tmat4x4<valType> ortho(
    valType const & left, valType const & right,
    valType const & bottom, valType const & top,
    valType const & zNear, valType const & zFar)
{
    detail::tmat4x4<valType> Result(1);
    Result[0][0] = valType(2) / (right - left);
    Result[1][1] = valType(2) / (top - bottom);
    Result[2][2] = - valType(2) / (zFar - zNear);
    Result[3][0] = - (right + left) / (right - left);
    Result[3][1] = - (top + bottom) / (top - bottom);
    Result[3][2] = - (zFar + zNear) / (zFar - zNear);
    return Result;
}
I replaced the last three initialization lines with the following code, and it also worked fine:
Result[0][3] = - (right + left) / (right - left);
Result[1][3] = - (top + bottom) / (top - bottom);
Result[2][3] = - (zFar + zNear) / (zFar - zNear);
This is the minimal vertex shader that I use for the test (note that, at the moment, uni_MVP is just the projection matrix described above):
uniform mat4 uni_MVP;
in vec2 in_Position;

void main(void)
{
    gl_Position = uni_MVP * vec4(in_Position.xy, 0.0, 1.0);
}
I don't think this is a bug, because all the functions work the same way. Could the problem be my C++ compiler inverting the order of multidimensional arrays? How can I solve this problem without changing the whole GLM source code?
I am using the latest GLM library (0.9.1) with Code::Blocks and MinGW on Windows Vista.