Capturing and controlling an audio microphone using AVCaptureSession?

While there are many tutorials on using AVCaptureSession to capture camera data, I cannot find any information (even on Apple's developer site) on how to properly process microphone data.

I have implemented AVCaptureAudioDataOutputSampleBufferDelegate and I get calls to my delegate, but I have no idea how the content of the CMSampleBufferRef that I receive is formatted. Are the contents of the buffer a single discrete sample? What are its properties? Where can I set these properties?
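For reference, the setup that produces these delegate callbacks looks roughly like this (a minimal sketch; the queue name is arbitrary and error handling is elided):

```objc
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Attach the default microphone as an input
NSError *error = nil;
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
[session addInput:input];

// Deliver sample buffers to the delegate on a serial queue
AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("audio.capture.queue", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];
[session addOutput:output];

[session startRunning];
```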

Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call on AVCaptureAudioDataOutput (no setAudioSettings: or anything like that).

1 answer

They are formatted as linear PCM (LPCM). You can verify this by retrieving the buffer's AudioStreamBasicDescription like this:

CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

and then checking the stream description's mFormatID field.
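Putting that into the delegate callback, a sketch of the check might look like this (the logged fields are illustrative; actual values depend on the device). Note that each CMSampleBufferRef carries a whole run of audio frames, not a single discrete sample:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription =
        CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

    if (asbd->mFormatID == kAudioFormatLinearPCM) {
        // Inspect the concrete sample format the device delivers
        NSLog(@"sample rate: %f, channels: %u, bits per channel: %u",
              asbd->mSampleRate,
              (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel);
    }

    // Number of audio frames contained in this buffer
    CMItemCount numFrames = CMSampleBufferGetNumSamples(sampleBuffer);
    NSLog(@"frames in buffer: %ld", (long)numFrames);
}
```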

