I am developing a music visualization application for the iPhone.
My plan is to capture audio through the iPhone microphone, run a Fourier transform on it, and then render a visualization from the frequency data.
The best example I could find is aurioTouch, which draws a nice graph based on FFT data. However, I have struggled to understand and reproduce what aurioTouch does in my own project.
I can't understand where exactly aurioTouch gets the data from the microphone before it runs the FFT.
Also, are there other code examples I could use as a starting point for this? Or any other tips?