Multiplying a stack of matrices by a stack of vectors in NumPy

I have an array of N 3x3 arrays (a set of matrices, although the data type is np.ndarray), and an array of N 3x1 arrays (a set of vectors). I want to multiply each matrix by its corresponding vector, so I expect to get back N 3x1 arrays.

A simple example:

A = np.ones((6,3,3))
B = np.ones((6,3,1))
np.dot(A,B) # This gives me a 6x3x6x1 array, which is not what I want
np.array(list(map(np.dot, A, B))) # This gives me exactly what I want, but I don't want to have to rely on map

I've tried all kinds of reshaping, looked into einsum, etc., but I can't get it to work the way I want. How do I do this with broadcasting? This operation will ultimately be performed many thousands of times, and I don't want map or list-comprehension operations slowing it down.

Use np.einsum:

np.einsum('ijk,ikl->ijl', A, B)
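
As a quick sanity check, here is a minimal sketch (the variable names are just illustrative) comparing the einsum result against a plain per-pair np.dot loop; in 'ijk,ikl->ijl' the shared index k is summed over for every i, which is exactly one matrix-vector product per entry of the stack:

import numpy as np

A = np.random.rand(6, 3, 3)  # stack of 6 matrices, 3x3 each
B = np.random.rand(6, 3, 1)  # stack of 6 column vectors, 3x1 each

C = np.einsum('ijk,ikl->ijl', A, B)                  # batched matrix-vector product
D = np.array([np.dot(a, b) for a, b in zip(A, B)])   # reference: one dot per pair

assert C.shape == (6, 3, 1)
assert np.allclose(C, D)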
Alternatively, the same result can be obtained with broadcasting and a sum:
A = np.random.rand(6, 3, 3)
B = np.random.rand(6, 3, 1)
C = np.array(list(map(np.dot, A, B)))
D = np.sum(A*B.swapaxes(1, 2), axis=2)[..., None]
assert np.allclose(C, D)
assert C.shape == D.shape == (6, 3, 1)

"allclose" - , 1-16.

The .swapaxes and the [..., None] are only there to handle the trailing singleton dimension of B. If B is instead a set of plain vectors with shape (6, 3), it simplifies to:

A = np.random.rand(6, 3, 3)
B = np.random.rand(6, 3)
C = np.array(list(map(np.dot, A, B)))
D = np.sum(A*B[:, None, :], axis=2)
assert np.allclose(C, D)
assert C.shape == D.shape == (6, 3)
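
For completeness, a rough sketch tying the two answers together (names like B_col and B_vec are just illustrative): the einsum form handles both shapes of B, and for the (6, 3, 1) case np.matmul / the @ operator broadcasts over the leading axis and gives the same result:

import numpy as np

A = np.random.rand(6, 3, 3)

# B as a stack of 3x1 column vectors
B_col = np.random.rand(6, 3, 1)
out_col = np.einsum('ijk,ikl->ijl', A, B_col)        # shape (6, 3, 1)
assert np.allclose(out_col, A @ B_col)               # matmul broadcasts over the first axis

# B as a stack of plain 3-vectors
B_vec = np.random.rand(6, 3)
out_vec = np.einsum('ijk,ik->ij', A, B_vec)          # shape (6, 3)
assert np.allclose(out_vec, np.sum(A * B_vec[:, None, :], axis=2))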
