Python SciPy convolve vs fftconvolve

I know that, generally speaking, FFT plus multiplication is usually faster than a direct convolution when the arrays are relatively large. However, I am convolving a very long signal (say, 10 million points) with a very short response (say, 1 thousand points). In this case, fftconvolve does not seem to make much sense, since it forces the FFT of the second array to be the same size as the first array. Is direct convolution faster in this case?

+8
python scipy fft convolution
3 answers

Take a look at the comparison I made here:

http://scipy-cookbook.readthedocs.io/items/ApplyFIRFilter.html

Your case may be close to the transition between using plain convolution and using FFT-based convolution, so your best bet (as @Dougal suggests in the comments) is to time it yourself.

(Note that in this comparison, I did not perform overlap-add or overlap-save.)
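For example, a minimal timing sketch (the random data and these particular sizes are placeholders; substitute your actual signal and kernel lengths):

    import numpy as np
    from scipy import signal
    from timeit import timeit

    sig = np.random.randn(2**20)   # placeholder long signal
    kern = np.random.randn(2**4)   # placeholder short kernel

    for name, func in [("numpy.convolve", np.convolve),
                       ("scipy.signal.convolve", signal.convolve),
                       ("scipy.signal.fftconvolve", signal.fftconvolve)]:
        # Average over a few runs; a single run is often noisy.
        t = timeit(lambda: func(sig, kern), number=3) / 3
        print(f"{name:25s} {t:.3f} s per call")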

+6

Thank you for your help. I have now run the test myself: I convolved two arrays of sizes 2^20 and 2^4, and this is the result:

    numpy.convolve:           110 ms
    scipy.signal.convolve:    1.0 s
    scipy.signal.fftconvolve: 2.5 s

So we have a winner: numpy.convolve is much faster than the others. I still don't know why.


I then tried two longer arrays, of sizes 2^22 and 2^10. The result:

    numpy.convolve:           6.7 s
    scipy.signal.convolve:    221 s
    scipy.signal.fftconvolve: MemoryError

The difference only grows with size.
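As an aside (this API postdates the original exchange, so it is an addition here): recent SciPy versions can make the direct-vs-FFT choice for you. scipy.signal.convolve accepts method='auto'/'direct'/'fft' (SciPy 0.19+), and scipy.signal.choose_conv_method reports which one it would pick:

    import numpy as np
    from scipy import signal

    sig = np.random.randn(2**22)
    kern = np.random.randn(2**10)

    # Estimate the faster method from the sizes alone (SciPy >= 0.19);
    # pass measure=True to actually time both.
    print(signal.choose_conv_method(sig, kern, mode='full'))  # 'direct' or 'fft'

    # Or let convolve pick automatically.
    out = signal.convolve(sig, kern, mode='full', method='auto')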

+3

Fast FFT convolution using the overlap-add or overlap-save algorithms can be done in limited memory, using FFTs that are only a small multiple (e.g., 2x) of the impulse-response length. It splits the one long FFT into properly overlapped, zero-padded short FFTs.

Even with the overlap overhead, O(N log N) will beat O(M * N) in efficiency for large enough N and M.
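A minimal overlap-add sketch along these lines (the block size and function name here are my own illustration, not a fixed choice):

    import numpy as np

    def overlap_add(x, h, block=4096):
        # Convolve a long signal x with a short kernel h, one block at a time.
        # Each FFT is only block + len(h) - 1 points, independent of len(x).
        m = len(h)
        nfft = block + m - 1
        H = np.fft.rfft(h, nfft)              # zero-padded FFT of the kernel
        y = np.zeros(len(x) + m - 1)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            # Circular convolution at length nfft equals linear convolution
            # here, because both seg and h are zero-padded to nfft.
            yseg = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
            n = len(seg) + m - 1
            y[start:start + n] += yseg[:n]    # overlapping tails add up
        return y

    # Sanity check against direct convolution on small arrays:
    x = np.random.randn(10000)
    h = np.random.randn(100)
    print(np.allclose(overlap_add(x, h), np.convolve(x, h)))

SciPy later gained scipy.signal.oaconvolve (SciPy 1.4+), which implements overlap-add for you.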

+2
