GLM matrix multiplication and OpenGL GLSL

I have similar code like in this question: some opengl and glm descriptions

I have a combined matrix that I pass as a single uniform:

 // C++
 mat4 combinedMatrix = projection * view * model;

 // GLSL - doesn't work
 out_position = combinedMatrix * vec4(vertex, 1.0);

This does not work. But if I pass each individual matrix and do all the multiplication in the shader:

 // GLSL - works
 out_position = projection * view * model * vec4(vertex, 1.0);

it works. I see nothing wrong with the matrices in my C++ code.

The following works too:

 // C++
 mat4 combinedMatrix = projection * view * model;
 vec4 p = combinedMatrix * v; // pass in vertex p as a vec4

 // GLSL - works
 out_position = vertex;
matrix opengl glsl glm-math
1 answer

I think the problem may lie in the matrix multiplication that you do in your code.

How is the following multiplication performed?

 mat4 combinedMatrix = projection * view * model;

It seems rather strange to me; if I am not mistaken, matrix multiplication cannot be done that way.

This is how I perform it:

 for (i = 0; i < 4; i++)
 {
     tmp.m[i][0] = (srcA->m[i][0] * srcB->m[0][0]) +
                   (srcA->m[i][1] * srcB->m[1][0]) +
                   (srcA->m[i][2] * srcB->m[2][0]) +
                   (srcA->m[i][3] * srcB->m[3][0]);

     tmp.m[i][1] = (srcA->m[i][0] * srcB->m[0][1]) +
                   (srcA->m[i][1] * srcB->m[1][1]) +
                   (srcA->m[i][2] * srcB->m[2][1]) +
                   (srcA->m[i][3] * srcB->m[3][1]);

     tmp.m[i][2] = (srcA->m[i][0] * srcB->m[0][2]) +
                   (srcA->m[i][1] * srcB->m[1][2]) +
                   (srcA->m[i][2] * srcB->m[2][2]) +
                   (srcA->m[i][3] * srcB->m[3][2]);

     tmp.m[i][3] = (srcA->m[i][0] * srcB->m[0][3]) +
                   (srcA->m[i][1] * srcB->m[1][3]) +
                   (srcA->m[i][2] * srcB->m[2][3]) +
                   (srcA->m[i][3] * srcB->m[3][3]);
 }
 memcpy(result, &tmp, sizeof(PATRIA_Matrix));

I may be wrong about the details, but I am fairly sure the multiplication should follow this pattern.

From your example it looks to me like a pointer multiplication issue (although I do not have the specifics of your mat4 matrix class/struct).
