Following the discussion with @Divakar, here is a comparison of the different convolution methods available in scipy:
import numpy as np
from scipy import signal, ndimage

def conv2(A, size):
    return signal.convolve2d(A, np.ones((size, size)), mode='same') / float(size**2)

def fftconv(A, size):
    return signal.fftconvolve(A, np.ones((size, size)), mode='same') / float(size**2)

def uniform(A, size):
    return ndimage.uniform_filter(A, size, mode='constant')
All 3 methods return the same values (up to floating-point error). Note, however, that uniform_filter has the parameter mode='constant', which specifies the boundary condition of the filter; constant == 0 means zero padding, which is the same boundary condition the other two methods use. For different use cases you can change the boundary conditions.
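As a quick sanity check (a minimal sketch, assuming the three functions defined above; B is just a small test matrix), the outputs can be compared with np.allclose:

B = np.random.randn(100, 100)
# The three implementations should agree up to floating-point error.
print(np.allclose(conv2(B, 3), fftconv(B, 3)))   # expected: True
print(np.allclose(conv2(B, 3), uniform(B, 3)))   # expected: True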
Now a test matrix:
A = np.random.randn(1000, 1000)
And some timings:
%timeit conv2(A, 3)
%timeit fftconv(A, 3)
%timeit uniform(A, 3)
In short, uniform_filter seems to be the fastest, because its convolution is separable into two one-dimensional convolutions (similar to gaussian_filter, which is also separable).
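As a rough illustration of that separability (a sketch, not part of the original answer), the same box mean can be computed with two passes of scipy.ndimage.uniform_filter1d, one per axis:

def uniform_separable(A, size):
    # A box kernel is separable: filter along axis 0, then along axis 1.
    tmp = ndimage.uniform_filter1d(A, size, axis=0, mode='constant')
    return ndimage.uniform_filter1d(tmp, size, axis=1, mode='constant')

B = np.random.randn(200, 200)
# Should match the 2-D uniform_filter up to floating-point error.
print(np.allclose(uniform_separable(B, 3), ndimage.uniform_filter(B, 3, mode='constant')))  # expected: True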
Other filters with different, non-separable kernels are more likely to be faster using the signal module (the one @Divakar used).
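For example (a hypothetical sketch, assuming A and the imports from above; K is just a random, generally non-separable kernel), there is no dedicated ndimage shortcut in that case, so one would fall back on the signal routines:

K = np.random.randn(7, 7)                      # generic, non-separable kernel
out = signal.fftconvolve(A, K, mode='same')    # FFT-based 2-D convolution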
The speed of both fftconvolve and uniform_filter stays roughly constant for different kernel sizes, while convolve2d gets slightly slower as the kernel grows.
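A minimal timing harness along those lines (a sketch, assuming the three functions defined above; the exact numbers depend on the machine and scipy version):

import timeit

A = np.random.randn(1000, 1000)

for size in (3, 5, 11, 21):
    for name, fn in (('conv2', conv2), ('fftconv', fftconv), ('uniform', uniform)):
        # One run per case is enough to see the trend with kernel size.
        t = timeit.timeit(lambda: fn(A, size), number=1)
        print(f'{name:8s} size={size:2d}: {t * 1e3:8.1f} ms')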