Vectorize this convolution-type loop more efficiently in numpy

I need to do many cycles of the following type

    for i in range(len(a)):
        for j in range(i + 1):
            c[i] += a[j] * b[i - j]

where a and b are short arrays of the same size, somewhere between 10 and 50 elements. This can be done efficiently using convolution:

    import numpy as np
    np.convolve(a, b)

However, this gives me the full convolution (i.e. the output vector is longer than what the for loop above produces). If I use the "same" option in convolve, I get the center part, but what I want is the first part. Of course, I can chop off what I do not need from the full vector, but I would like to avoid the unnecessary computation, if possible. Can anyone suggest a better way to vectorize this loop?

+6
2 answers

You can write a small C extension in Cython:

    # cython: boundscheck=False
    cimport numpy as np
    import numpy as np  # for zeros_like

    ctypedef np.float64_t np_t

    def convolve_cy_np(np.ndarray[np_t] a not None,
                       np.ndarray[np_t] b not None,
                       np.ndarray[np_t] c=None):
        if c is None:
            c = np.zeros_like(a)
        cdef Py_ssize_t i, j, n = c.shape[0]
        with nogil:
            # the same double loop as in the question, compiled to C
            for i in range(n):
                for j in range(i + 1):
                    c[i] += a[j] * b[i - j]
        return c
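If you go this route, a minimal build sketch could look like the following; saving the code above as convolve_cy.pyx and the snippet below as setup.py are my assumptions, not part of the original answer:

    # setup.py -- minimal build sketch (file and module names are assumptions)
    from setuptools import setup
    from Cython.Build import cythonize
    import numpy as np

    setup(
        ext_modules=cythonize("convolve_cy.pyx"),
        include_dirs=[np.get_include()],  # needed because the .pyx cimports numpy
    )

Running python setup.py build_ext --inplace then compiles it into an importable convolve_cy module.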

It works well for n=10..50 compared to np.convolve(a,b)[:len(a)] on my machine.
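A rough way to reproduce that comparison, assuming the extension was compiled into a module named convolve_cy (the module name is an assumption):

    import timeit
    import numpy as np
    from convolve_cy import convolve_cy_np  # hypothetical compiled module

    a = np.random.rand(30)
    b = np.random.rand(30)

    # compare the Cython loop against slicing the full convolution
    print(timeit.timeit(lambda: convolve_cy_np(a, b), number=100000))
    print(timeit.timeit(lambda: np.convolve(a, b)[:len(a)], number=100000))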

It also seems to work with numba.
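A minimal sketch of the same double loop compiled with numba's JIT, assuming numba is installed (an illustration, not code from the original answer):

    import numba
    import numpy as np

    @numba.njit
    def convolve_nb(a, b):
        # same double loop as in the question, compiled by numba
        c = np.zeros_like(a)
        for i in range(len(a)):
            for j in range(i + 1):
                c[i] += a[j] * b[i - j]
        return c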

+2

There is no way to compute this partial convolution with vectorized array manipulations in numpy. Your best bet is to use np.convolve(a, b) and trim off what you do not need; that will still probably be about 10 times faster than the double loop in pure Python that you have above. You can also roll your own with Cython if you are really concerned about speed, but it probably won't be much faster, if at all, than np.convolve().
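For concreteness, a minimal sketch of that approach (the array size here is just an illustration):

    import numpy as np

    # hypothetical inputs of the size mentioned in the question
    a = np.random.rand(30)
    b = np.random.rand(30)

    # full convolution, then keep only the first len(a) entries
    c = np.convolve(a, b)[:len(a)]

    # sanity check against the original double loop
    c_loop = np.zeros_like(a)
    for i in range(len(a)):
        for j in range(i + 1):
            c_loop[i] += a[j] * b[i - j]
    assert np.allclose(c, c_loop)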

+2

Source: https://habr.com/ru/post/927335/

