I recently read a matrix tutorial for OpenGL and came across an optimized matrix multiplication function that I cannot understand.
typedef struct Matrix
{
    float m[16];
} Matrix;

static const Matrix IDENTITY_MATRIX = { {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
} };
Matrix MultiplyMatrices(const Matrix* m1, const Matrix* m2)
{
    Matrix out = IDENTITY_MATRIX;
    unsigned int row, column, row_offset;

    for (row = 0, row_offset = row * 4; row < 4; ++row, row_offset = row * 4)
        for (column = 0; column < 4; ++column)
            out.m[row_offset + column] =
                (m1->m[row_offset + 0] * m2->m[column + 0]) +
                (m1->m[row_offset + 1] * m2->m[column + 4]) +
                (m1->m[row_offset + 2] * m2->m[column + 8]) +
                (m1->m[row_offset + 3] * m2->m[column + 12]);

    return out;
}
Here are the questions I have:
In the MultiplyMatrices function, why are m1 and m2 passed as pointers? If the function only reads their values and returns a new matrix by value, why use pointers at all?
Why does the increment clause of the for loop repeat the same assignment as its initialization?
for (row = 0, row_offset = row * 4; row < 4; ++row, row_offset = row * 4)