Does anyone know how to perform cross-correlation between two audio signals on iOS?
I want to align the FFT windows at the receiver (which captures the signal from the microphone) with those at the transmitter (which plays the soundtrack), i.e. ensure that the first sample of each window (outside the "synchronization" period) in the transmitter is also the first sample of the corresponding window in the receiver.
I embed a known waveform (in the frequency domain) into each piece of the transmitted audio. I want to estimate the delay by cross-correlating the known waveform with the received signal (over several consecutive pieces), but I do not know how to do this.
There seems to be a vDSP_convD function for this, but I have no idea how to use it, or whether I must first run a real FFT on the samples (probably yes, since I need to pass a double[]).
    void vDSP_convD(const double  __vDSP_signal[],
                    vDSP_Stride   __vDSP_signalStride,
                    const double  __vDSP_filter[],
                    vDSP_Stride   __vDSP_strideFilter,
                    double        __vDSP_result[],
                    vDSP_Stride   __vDSP_strideResult,
                    vDSP_Length   __vDSP_lenResult,
                    vDSP_Length   __vDSP_lenFilter)
Kal