iOS: RemoteIO AudioBuffer manipulation (i.e. sound effects from the microphone)

I have been playing with Apple's aurioTouch demo, which is the sample code for their Audio Unit tutorial. This application allows simultaneous microphone input and speaker output, and it also displays a stereograph visualization of the incoming microphone audio.

At a very high level of this low-level process, the sample code defines an AudioComponent (in this case RemoteIO, which allows simultaneous input and output) and registers a render callback for that Audio Unit. In the callback they do some audio filtering (a DC rejection filter) and draw the stereograph visualization from the AudioBuffer sound data coming from the microphone.

My ultimate goal is to create my own custom sound-distortion Audio Unit driven by microphone input. Based on the Audio Unit tutorial, the right way to do this would seem to be to create a second Audio Unit node and connect the two in an audio processing graph. However, I have read that iOS does not allow you to register your own custom Audio Units. My questions:

  • Can I manipulate the AudioBufferList directly in the render callback of the RemoteIO Audio Unit (since the demo already applies a sound filter there) and create my own sound distortion that way?
  • I have tried overwriting the AudioBufferList data with a constant value (which I expected to silence the output), but it seemed to have no effect.

Yes, you can manipulate the audio data directly in the render callback of the RemoteIO unit on the iPhone.

First: set up the ioUnit and register your render callback on it.

The callback is invoked for the output bus (element #0) whenever the remote IO unit needs more audio to play; the hardware stream parameters (sample rate, etc.) are configured through the AudioSession.

Inside the callback, fill an AudioBufferList with microphone samples by calling AudioUnitRender on the input bus (element #1). Once Render() returns, the buffers contain the fresh input data; this is what aurioTouch does as well.

Now you can process those samples however you like. The AudioBufferList gives you pointers to the underlying data buffers, so you can read its contents, transform them, and write them back in place.

ioData, the AudioBufferList passed into your render callback, is also the output that gets played, so write your processed samples back into it (i.e. ioData->mBuffers[0].mData) before returning from the callback.
