I am writing a routine that measures the phase difference between two spectra using NumPy/SciPy.
I already had a working program in Matlab, so I reimplemented the function and its corresponding unit test with NumPy. However, I found that the unit test fails because scipy.fftpack.fft introduces small numerical errors:
    import numpy as np
    import scipy.fftpack

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
    X = scipy.fftpack.fft(x)
In this case, since the time-domain signal is symmetric, the expected output is
    [16.0000 -6.8284 0 -1.1716 0 -1.1716 0 -6.8284]
as shown in the following Matlab code:
    >> x = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0];
    >> X = fft(x)
    X =
       16.0000   -6.8284    0   -1.1716    0   -1.1716    0   -6.8284
According to DSP theory, the DFT of a real, even-symmetric signal should be purely real, with no imaginary components. However, the SciPy result is as follows:
    array([ 16.00000000 +0.00000000e+00j,  -6.82842712 -2.22044605e-16j,
             0.00000000 -0.00000000e+00j,  -1.17157288 -2.22044605e-16j,
             0.00000000 +0.00000000e+00j,  -1.17157288 +2.22044605e-16j,
             0.00000000 +0.00000000e+00j,  -6.82842712 +2.22044605e-16j])
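For what it is worth, I added a sanity check of my own (not part of the original routine) to confirm that the input really is circularly even, and I noticed that the stray imaginary parts are exactly on the order of double-precision machine epsilon:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])

    # The signal is circularly even: x[n] == x[(-n) % N] for all n,
    # so by DFT symmetry its spectrum should be purely real.
    print(np.allclose(x, np.roll(x[::-1], 1)))   # True

    # The stray imaginary parts match double-precision machine epsilon.
    print(np.finfo(np.float64).eps)              # 2.220446049250313e-16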
Why does scipy.fftpack.fft introduce these small imaginary components? I really want to avoid this problem. Can anyone give me a suggestion?
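As a stopgap, I am currently suppressing the noise after the fact with np.real_if_close (my own workaround, not something from the original Matlab code), but this feels like hiding the problem rather than fixing it:

    import numpy as np
    import scipy.fftpack

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
    X = scipy.fftpack.fft(x)

    # np.real_if_close returns a real array when every imaginary part is
    # within tol machine epsilons of zero (tol defaults to 100).
    X_clean = np.real_if_close(X)
    print(X_clean)
    # [16. -6.82842712 0. -1.17157288 0. -1.17157288 0. -6.82842712]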