The derivative of a sinusoid, or complex exponential, is directly proportional to its frequency, and shifted in phase by π/2 . For a complex exponential, the phase shift is equivalent to multiplying by j . For example, d/dt exp(j*Ω*t) == j*Ω * exp(j*Ω*t) == Ω * exp(j*π/2) * exp(j*Ω*t) == Ω * exp(j*(Ω*t + π/2)) . So, if you have a Fourier transform pair u(t) <=> U(Ω) , then du/dt <=> jΩ * U(Ω) . Integration is roughly the inverse operation, but it adds an impulse if there's a DC component: ∫u dt <=> U(Ω) / (jΩ) + π*U(0)*δ(Ω)
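A quick numerical check of the identities above (the values of Ω, t, and the step size here are arbitrary): compare a central-difference estimate of d/dt exp(j*Ω*t) against both j*Ω*exp(j*Ω*t) and Ω*exp(j*(Ω*t + π/2)).

```python
import numpy as np

Om, t0, h = 3.0, 0.7, 1e-6   # arbitrary frequency, time, and step

# central-difference estimate of the derivative at t0
num = (np.exp(1j*Om*(t0 + h)) - np.exp(1j*Om*(t0 - h))) / (2*h)

# the two equivalent closed forms from the text
d1 = 1j*Om * np.exp(1j*Om*t0)
d2 = Om * np.exp(1j*(Om*t0 + np.pi/2))

print(abs(num - d1), abs(d1 - d2))   # both differences are ~0
```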
To approximate the derivative from a sampled sequence, you have to scale the discrete-time angular frequency ω (which ranges from 0 to 2π, or -π to π) by the sampling frequency fs . For example, say the discrete-time frequency is π/2 radians/sample, e.g. the sequence [1, 0, -1, 0, ...] . In the original signal this corresponds to a frequency of fs/4 . The derivative d/dt cos(2*π*(fs/4)*t) == d/dt cos(π/2*fs*t) == -π/2*fs*sin(π/2*fs*t) == π/2*fs*cos(π/2*fs*t + π/2) .
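Here's that example verified numerically (the value of fs is arbitrary): differentiate the sequence [1, 0, -1, 0, ...] through the DFT and compare against the analytic derivative, whose amplitude is (π/2)*fs.

```python
import numpy as np

fs = 8.0                       # arbitrary sampling rate for the check
n = np.arange(16)
x = np.cos(np.pi/2 * n)        # [1, 0, -1, 0, ...]: pi/2 rad/sample == fs/4

Omega = 2*np.pi * np.fft.rfftfreq(len(n), d=1/fs)   # w * fs, in rad/s
dx = np.fft.irfft(1j * Omega * np.fft.rfft(x))

# analytic: (pi/2)*fs * cos(pi/2*fs*t + pi/2) sampled at t = n/fs
expected = (np.pi/2) * fs * np.cos(np.pi/2 * n + np.pi/2)
print(np.max(np.abs(dx - expected)))   # ~0
```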
You need to sample at an fs that's greater than twice the signal bandwidth. The component at exactly fs/2 is unreliable. In particular, with only two samples per cycle, all you observe of an fs/2 component is an alternating sequence whose amplitude depends on the phase at the first sample. Thus, for a real-valued signal, the fs/2 DFT component is real, with a phase of either 0 or π radians, i.e. a*cos(π*fs*t + {0, π}) . Given the latter, the derivative of the fs/2 component is a*π*fs*cos(π*fs*t + {π/2, 3*π/2}) , which is 0 at the sample times t == n/fs .
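You can see the realness of the fs/2 bin directly (arbitrary random real signal, N = 16):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)    # arbitrary real-valued signal, N even
X = np.fft.fft(x)

# the fs/2 bin at index N/2 is purely real (phase 0 or pi), like the DC bin
print(X[8].imag, X[0].imag)    # both ~0
```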
(As for the latter, the standard trigonometric interpolation of the DFT uses a cosine, in which case the derivative is zero at the sample points. That's not necessarily what you'd get if you had sampled the signal and its derivative simultaneously. Sampling loses the phase of the fs/2 component relative to the signal, and that phase doesn't carry over to its derivative. Depending on when sampling starts, both the fs/2 component and its derivative can be non-zero at the sample points. If one of them is 0 at the sample times, the other is at its peak, since they're π/2 radians apart from each other.)
Since the fs/2 DFT component is always real for a real-valued signal, multiplying it by j when computing the derivative or integral introduces an imaginary component into the result. The workaround is simple: if N is even, just zero out the fs/2 component at index N/2 (the last element of the rfft output). A separate problem is division by zero when dividing by jΩ for integration. That can be solved by adding a small value at index 0 of the Ω vector (e.g. finfo(float64).tiny , the smallest positive normal double-precision float).
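A sketch of both workarounds applied to integration (the signal and rate here are arbitrary; the DC bin is also zeroed explicitly, since the integrand has zero mean and any round-off left in bin 0 would blow up when divided by tiny):

```python
import numpy as np

fs = 64
t = np.arange(2*fs) / fs
x = np.sin(2*np.pi*t)                     # zero-mean 1 Hz signal

X = np.fft.rfft(x)
X[0] = 0                                  # drop DC (integration constant 0)
X[-1] = 0                                 # zero the unreliable fs/2 bin
Omega = 2*np.pi * np.fft.rfftfreq(len(t), d=1/fs)
Omega[0] = np.finfo(np.float64).tiny      # avoid 0/0 at DC

ix = np.fft.irfft(X / (1j*Omega))
expected = -np.cos(2*np.pi*t) / (2*np.pi)  # zero-mean antiderivative
print(np.max(np.abs(ix - expected)))       # ~0
```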
Given Ω = fs * ω , the system in the question takes the following form in the frequency domain:
H(Ω) = 1 / (g + j*Ω*C)
U(Ω) = I(Ω) * H(Ω)
This is a single-pole low-pass filter. The solution you obtained has two problems.
- You don't scale the frequency variable w by fs .
- You're using fftfreq , which gives frequencies in the range -0.5 to 0.5. You need -π to π. In fact, you only need 0 to π, since i(t) is real-valued; in that case you can use rfft and irfft for real signals, which skip the computation of the negative frequencies.
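For example, the frequency vector for rfft can be built directly with rfftfreq (N and fs here match the examples below):

```python
import numpy as np

fs = 1024
N = 2 * fs
w = 2*np.pi * np.fft.rfftfreq(N)           # 0..pi, rad/sample
Omega = w * fs                             # 0..pi*fs, rad/s
# or in one step:
Omega2 = 2*np.pi * np.fft.rfftfreq(N, d=1/fs)
print(len(Omega), Omega[-1])               # N//2 + 1 bins, up to pi*fs
```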
All that said, you may still be disappointed with the result, because the DFT assumes a periodic extension of your signal.
Examples
First, here's a simple example of a 1 Hz sinusoid (in red), sampled at 1024 samples/s for 2 seconds, and its derivative computed via the DFT (plotted in blue):
from pylab import *
fs = 1024
t = arange(2*fs, dtype=float64) / fs
N = len(t) // 2 + 1                # non-negative frequencies
w = 2 * pi / len(t) * arange(N)
Omega = w * fs
x0 = cos(2*pi*t)                   # w0 == 2*pi/fs
X0 = rfft(x0)
# Zero the fs/2 component. It's zero here anyway.
X0[-1] = 0
dx0 = irfft(X0 * 1j*Omega)
plot(t, x0, 'r', t, dx0, 'b')
show()

This simple case is a band-limited periodic signal. Just remember to sample an integer number of periods at a high enough rate to avoid aliasing.
The next example is a triangle wave with slopes of 1 and -1, and discontinuities in the derivative at the center and the edges. Ideally the derivative would be a square wave, but computing that exactly would require infinite bandwidth. Instead, you get Gibbs ringing around the discontinuities:
t2 = t[:fs]
m = len(t) // (2*len(t2))
x1 = hstack([t2, 1.0 - t2] * m)
X1 = rfft(x1)
X1[-1] = 0
dx1 = irfft(X1 * 1j*Omega)
plot(t, x1, 'r', t, dx1, 'b')
show()

The implicit periodic extension of the DFT is problematic if you're solving a non-periodic system. Here's the solution of the system in question using both odeint and the DFT ( tau set to 0.5 s; g and C set for a corner frequency of 1 Hz):
from scipy.integrate import odeint

a = 1.0; tau = 0.5
g = 1.0; C = 1.0 / (2 * pi)
i = lambda t: a * t / tau * exp(1 - t / tau)
f = lambda u, t: (-g*u + i(t)) / C

H = 1 / (g + 1j*Omega*C)           # system function
I = rfft(i(t))
I[-1] = 0
U_DFT = I * H
u_dft = irfft(U_DFT)
u_ode = odeint(f, u_dft[0], t)[:,0]

td = t[:-1] + 0.5/fs               # midpoints for the diff estimate
subplot(221); title('odeint u(t)'); plot(t, u_ode)
subplot(222); title('DFT u(t)'); plot(t, u_dft)
subplot(223); title('odeint du/dt')
plot(td, diff(u_ode)*fs, 'r', t, (-g*u_ode + i(t)) / C, 'b')
subplot(224); title('DFT du/dt')
plot(td, diff(u_dft)*fs, 'r', t, (-g*u_dft + i(t)) / C, 'b')
show()

The du/dt graphs overlay the diff estimate of the derivative (red) with the value computed from the differential equation (blue). In both cases they're approximately equal. I set the initial value for odeint to u_dft[0] to show that it reproduces the DFT solution given the same initial value. The difference is that the odeint solution would continue to decay to zero, while the DFT solution is periodic with a 2 s period. The DFT result would look better in this case if more of i(t) were sampled, since i(t) starts at zero and decays to zero.
Zero padding is used with the DFT to perform linear convolution. In this particular case, zero padding the input signal would help separate the transient of the zero-state response from its steady state. However, the Laplace or z-transform are more commonly used to analyze the ZSR/ZIR. scipy.signal has tools to analyze LTI systems, including mapping continuous time to discrete time, in transfer-function (polynomial), zero-pole, and state-space forms.
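As a sketch of that route, here's the same first-order system solved with scipy.signal.lsim , which integrates the ODE directly and doesn't assume periodicity (same g , C , and i(t) as above):

```python
import numpy as np
from scipy.signal import lti, lsim

a, tau = 1.0, 0.5
g, C = 1.0, 1.0/(2*np.pi)
fs = 1024
t = np.arange(2*fs) / fs
i_t = a * t/tau * np.exp(1 - t/tau)   # the input current i(t)

sys = lti([1.0], [C, g])              # H(s) = 1 / (C*s + g)
tout, u, _ = lsim(sys, i_t, t)        # zero-state response
```

Unlike the DFT solution, u here keeps decaying past the end of the sampled interval instead of wrapping around periodically.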
John P. Boyd covers Chebyshev approximation of non-periodic functions in his book Chebyshev and Fourier Spectral Methods (available free online from his University of Michigan page).
You'd probably get more help with this question on the Signal Processing or Mathematics Stack Exchange sites.