Yes, this is exactly the kind of problem Numba excels at. I changed the dk value because the original was unreasonably small for a simple demonstration. Here is the code:
import numpy as np
import numba as nb

def f_big(A, k, std_A, std_k, mean_A=10, mean_k=0.2, hh=100):
    return ((1 / (std_A * std_k * 2 * np.pi)) * A * (hh / 50) ** k
            * np.exp(-(k - mean_k) ** 2 / (2 * std_k ** 2)
                     - (A - mean_A) ** 2 / (2 * std_A ** 2)))

def func():
    outer_sum = 0
    dk = 0.01
    # The rest of this function was cut off in the original post; the loop
    # below is a plausible reconstruction of the nested rectangle-rule sum
    # over k and A (grid bounds and std values chosen for illustration).
    for k in np.arange(dk, 0.4, dk):
        inner_sum = 0
        for A in np.arange(0, 20, 0.01):
            inner_sum += dk * 0.01 * f_big(A, k, 0.1, 0.01)
        outer_sum += inner_sum
    return outer_sum
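The truncated post also needs the compiled counterpart that the timings below call. A minimal sketch of one way to build it with Numba's nopython mode, assuming the reconstruction above (the plain func stays uncompiled so the comparison is fair; note the integrand must itself be compiled to be callable from nopython code):

f_big_nb = nb.njit(f_big)  # compile the integrand so nopython code can call it

@nb.njit
def func_nb():
    outer_sum = 0.0  # float literal avoids int/float type-unification issues
    dk = 0.01
    for k in np.arange(dk, 0.4, dk):
        inner_sum = 0.0
        for A in np.arange(0, 20, 0.01):
            inner_sum += dk * 0.01 * f_big_nb(A, k, 0.1, 0.01)
        outer_sum += inner_sum
    return outer_sum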
And then the timings:
In [7]: np.allclose(func(), func_nb())
Out[7]: True

In [8]: %timeit func()
1 loops, best of 3: 222 ms per loop

In [9]: %timeit func_nb()
The slowest run took 419.10 times longer than the fastest. This could mean that an intermediate result is being cached
1000 loops, best of 3: 362 µs per loop
So on my laptop the Numba version is about 600 times faster (222 ms vs. 362 µs).
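The "419.10 times longer than the fastest" warning is expected here: the first call to func_nb includes Numba's JIT compilation, and %timeit sees that run as an outlier. A warm-up call before timing keeps compilation out of the measurement; a minimal sketch:

func_nb()          # warm-up call: triggers JIT compilation once
%timeit func_nb()  # subsequent calls time only the compiled machine code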