Efficient algorithms for multiplying a matrix by its transpose

In class, my teacher asked about the algorithmic cost of multiplying a matrix by its transpose. With the standard three-loop matrix multiplication algorithm the cost is O(n^3), and I am wondering whether there is some way to exploit the structure of A * A^T to get a faster algorithm. I understand that when you multiply a matrix by its transpose the result is symmetric, so fewer entries need to be computed, but I can't figure out how to turn that into an algorithm that takes less than O(n^3).
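For concreteness, here is a minimal sketch of the kind of three-loop algorithm I mean, specialized to C = A * A^T (the function name and the row-major flat-array layout are just for illustration):

```c
#include <stddef.h>

/* Naive O(n^3) computation of C = A * A^T for an n x n matrix A,
 * stored row-major. Entry C[i][j] is the dot product of rows i and j of A. */
void mul_by_transpose_naive(const double *A, double *C, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * A[j * n + k];
            C[i * n + j] = sum;
        }
}
```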

I know about algorithms like Coppersmith–Winograd and Strassen, which are faster general matrix multiplication algorithms, but can anyone give a hint or some insight into how to take advantage of the transpose computationally?

Thanks.

algorithm complexity-theory matrix-multiplication linear-algebra
1 answer

Currently, no asymptotically faster algorithm is known for this particular product; multiplying a matrix by its transpose has the same asymptotic cost as general matrix multiplication.

The obvious optimization is to use the symmetry of the product: the [i][j] entry is equal to the [j][i] entry, so you only need to compute roughly half the entries, as sketched below. This is a constant-factor saving, not an asymptotic one.
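A minimal sketch of that symmetry optimization, under the same assumed row-major layout (the function name is illustrative):

```c
#include <stddef.h>

/* Compute C = A * A^T using symmetry: only the upper triangle (j >= i)
 * is computed directly; the lower triangle is filled in by mirroring.
 * This roughly halves the inner-loop work, a constant-factor saving. */
void mul_by_transpose_symmetric(const double *A, double *C, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * A[j * n + k];
            C[i * n + j] = sum;   /* upper triangle */
            C[j * n + i] = sum;   /* mirror into lower triangle */
        }
}
```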

For implementation-oriented optimization, there is a significant amount of caching you can exploit. When multiplying large matrices, a very significant share of the time is spent moving data between main memory and the CPU. Processor designers therefore implement a caching system in which recently used memory is kept in a small, fast store called the cache. In addition, memory near a recently accessed address is cached along with it, because most memory I/O comes from reading and writing arrays, which are stored sequentially.

Since the transpose is just the same matrix with the indices swapped, every value of A brought into the cache is useful to both operands of the product, so caching a value can have more than twice the impact it has in a general multiplication. A blocked (tiled) loop order, sketched below, takes direct advantage of this.
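One standard way to exploit the cache here is loop tiling (blocking): process the matrices in small square tiles so the rows involved stay resident in cache while they are reused. A sketch under the same row-major assumption; BLOCK is an illustrative tuning parameter you would size to your cache:

```c
#include <stddef.h>

#define BLOCK 64  /* illustrative tile size; tune so the working set fits in cache */

/* Cache-blocked C = A * A^T. Both operands read rows of A, which are
 * contiguous in row-major storage, so the inner loop walks memory
 * sequentially and makes full use of each fetched cache line. */
void mul_by_transpose_blocked(const double *A, double *C, size_t n) {
    for (size_t i = 0; i < n * n; i++)
        C[i] = 0.0;                       /* C accumulates partial sums */
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t jj = 0; jj < n; jj += BLOCK)
            for (size_t kk = 0; kk < n; kk += BLOCK)
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t j = jj; j < jj + BLOCK && j < n; j++) {
                        double sum = C[i * n + j];
                        for (size_t k = kk; k < kk + BLOCK && k < n; k++)
                            sum += A[i * n + k] * A[j * n + k];
                        C[i * n + j] = sum;
                    }
}
```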

