Creating an iPhone Fourier Transform Music Video Game

I am developing a music visualization application for the iPhone.

My plan is to pick up data through the iPhone microphone, run a Fourier transform on it, and then build a visualization from the result.

The best example I could find is aurioTouch, which draws a nice graph based on FFT data. However, I have struggled to understand and reproduce what aurioTouch does in my own project.

In particular, I can't figure out where exactly aurioTouch gets the data from the microphone before it runs the FFT.

Also, are there other code examples that I could use for this in my project? Or any other tips?

1 answer

Since I plan to use microphone input myself, I thought your question was a good opportunity to get familiar with the relevant sample code.

Here are the steps I took while reading through the code:

  • Starting with SpectrumAnalysis.cpp (since it is obvious that the sound has to reach this class somehow), you can see that the class method SpectrumAnalysisProcess has a second input argument, const int32_t* inTimeSig, which is a promising start: the input time signal is what we are looking for.
  • Using Find in Project on this method, you can see that it is called in FFTBufferManager::ComputeFFT, which passes mAudioBuffer as that time-signal argument (the inTimeSig from step 1). Searching for where mAudioBuffer is written shows that it is filled by a memcpy inside FFTBufferManager::GrabAudioData.
  • In the same way, FFTBufferManager::GrabAudioData turns out to be called from PerformThru, whose ioData argument (an AudioBufferList) contains the raw audio data.
  • Searching for PerformThru leads to the line inputProc.inputProc = PerformThru. Since inputProc is an AURenderCallbackStruct, this is the key point: PerformThru is registered as a render callback, i.e. the function the system calls whenever new microphone data is available.

So, to answer the original question: look up AURenderCallbackStruct in the documentation (for example, in the Audio Unit Hosting Guide). The render callback registered there is the point where aurioTouch receives the microphone data before it performs the FFT.

