(Python) How to get the diagonal of (A * B) without having to compute the full A * B?

Say we have two matrices A and B, and let C be A*B (matrix multiplication, not element-wise). We only want the diagonal elements of C, which we can get via np.diagonal(C). However, this wastes time because it computes the full product A*B, while we only need to multiply each row of A with the column of B that has the same index (row 1 of A with column 1 of B, row 2 of A with column 2 of B, and so on): exactly the multiplications that form the diagonal of C. Is there a way to do this efficiently with NumPy? I want to avoid using loops to control which row is multiplied by which column; instead I want a built-in NumPy method that performs this operation, to optimize performance.

Thanks in advance.
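To make the requested operation concrete, here is a minimal sketch of the identity involved (an editorial illustration, not part of the original question): element i of the diagonal of A.dot(B) is just the dot product of row i of A with column i of B, so only n row-column products are actually needed.

    import numpy as np

    A = np.arange(9).reshape(3, 3)
    B = np.arange(9, 18).reshape(3, 3)

    # Wasteful route: compute the whole product, then keep only the diagonal
    full = np.diagonal(A.dot(B))

    # Only the needed products: row i of A with column i of B
    # (plain loop shown just to spell out the identity)
    direct = np.array([A[i, :].dot(B[:, i]) for i in range(A.shape[0])])

    assert np.array_equal(full, direct)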

+7
python numpy matrix
2 answers

You could use einsum here:

    >>> a = np.random.randint(0, 10, (3, 3))
    >>> b = np.random.randint(0, 10, (3, 3))
    >>> a
    array([[9, 2, 8],
           [5, 4, 0],
           [8, 0, 6]])
    >>> b
    array([[5, 5, 0],
           [3, 5, 5],
           [9, 4, 3]])
    >>> a.dot(b)
    array([[123,  87,  34],
           [ 37,  45,  20],
           [ 94,  64,  18]])
    >>> np.diagonal(a.dot(b))
    array([123, 45, 18])
    >>> np.einsum('ij,ji->i', a, b)
    array([123, 45, 18])

For large arrays, this will be much faster than computing the full product and then taking its diagonal:

    >>> a = np.random.randint(0, 10, (1000, 1000))
    >>> b = np.random.randint(0, 10, (1000, 1000))
    >>> %timeit np.diagonal(a.dot(b))
    1 loops, best of 3: 7.04 s per loop
    >>> %timeit np.einsum('ij,ji->i', a, b)
    100 loops, best of 3: 7.49 ms per loop

[Note: I originally posted a version for element-wise multiplication, 'ii,ii->i', instead of matrix multiplication. The same einsum trick applies to both.]
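As a rough sketch of what that subscript string does (an editorial addition, not from the original answer): 'ij,ji->i' sums a[i, j] * b[j, i] over j, which is the same as multiplying a element-wise by the transpose of b and summing each row.

    import numpy as np

    a = np.random.randint(0, 10, (3, 3))
    b = np.random.randint(0, 10, (3, 3))

    via_einsum = np.einsum('ij,ji->i', a, b)   # sum over j of a[i, j] * b[j, i]
    via_transpose = (a * b.T).sum(axis=1)      # same values without einsum
    reference = np.diagonal(a.dot(b))          # full product, for checking only

    assert np.array_equal(via_einsum, via_transpose)
    assert np.array_equal(via_einsum, reference)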

+15
    def diag(A, B):
        # Element x of diag(A * B) is the dot product of row x of A with column x of B
        diags = []
        for x in range(len(A)):
            diags.append(sum(A[x][k] * B[k][x] for k in range(len(B))))
        return diags

I believe the code above is what you are looking for.
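A quick usage check of the loop-based helper against NumPy (an editorial addition, assuming the diag function defined above):

    import numpy as np

    A = np.random.randint(0, 10, (4, 4))
    B = np.random.randint(0, 10, (4, 4))

    assert np.array_equal(diag(A, B), np.diagonal(A.dot(B)))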

-1
