Adding audio sample buffers [from a file] to live audio sample buffers [being written to a file]

What I'm trying to do:

Record audio/video up to a certain length, with predefined background music from an external audio file mixed into the resulting output file, and with no further encoding/export step after recording.

It's as if you were recording videos with the iPhone's Camera app, and every recorded video in the Camera Roll came out with a background song already in it: no export or post-processing after recording, and not as a separate audio track.


How I'm trying to do this:

I use AVCaptureSession, and in the delegate method where the sample buffers (CMSampleBufferRef) arrive, I append them to an AVAssetWriter to write them to a file. Since I don't want multiple audio tracks in my output file, I can't pass the background audio through a separate AVAssetWriterInput, which means I have to mix the background audio into each recorded sample buffer during recording to avoid a merge/export step afterwards.

The background music is a specific, predefined audio file (format/codec: m4a AAC), and it needs no timing edits; it just has to run under the whole recording, from beginning to end. The recording will never be longer than the background music file.

Before starting to write to the file, I also set up an AVAssetReader to read the specified audio file.
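Roughly, that reader setup could look like the sketch below; backgroundAudioURL is a placeholder, and the LPCM output settings are an assumption (decoded LPCM is what's needed if the samples are to be summed directly, whereas outputSettings:nil hands back the stored AAC packets):

    // Sketch only; backgroundAudioURL is a placeholder for the bundled m4a file.
    AVURLAsset *backgroundAsset = [AVURLAsset URLAssetWithURL:backgroundAudioURL options:nil];
    AVAssetTrack *backgroundAudioTrack =
        [[backgroundAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];

    NSError *error = nil;
    backgroundAudioReader = [AVAssetReader assetReaderWithAsset:backgroundAsset error:&error];

    // Assumption: ask for decoded, interleaved 16-bit LPCM so the buffers can be
    // summed with the microphone audio later (outputSettings:nil would return AAC packets).
    NSDictionary *lpcmSettings = @{
        AVFormatIDKey               : @(kAudioFormatLinearPCM),
        AVLinearPCMBitDepthKey      : @16,
        AVLinearPCMIsFloatKey       : @NO,
        AVLinearPCMIsBigEndianKey   : @NO,
        AVLinearPCMIsNonInterleaved : @NO
    };
    backgroundAudioTrackOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                                   outputSettings:lpcmSettings];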

Some pseudocode (threading omitted):

    -(void)startRecording {
        /* Initialize writer and reader here: [...] */

        backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                      assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                      outputSettings:nil];

        if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
            [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
        else
            NSLog(@"This doesn't happen");

        [backgroundAudioReader startReading];

        /* Some more code */

        recording = YES;
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (!recording)
            return;

        if (connection == videoConnection)
            [self writeVideoSampleBuffer:sampleBuffer];
        else if (connection == audioConnection)
            [self writeAudioSampleBuffer:sampleBuffer];
    }

The AVCaptureSession is already streaming camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I do it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were done normally, without a background track, I would simply do something like [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case, however, I need to mix two buffers before writing:

    -(void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer {
        CMSampleBufferRef backgroundSampleBuffer =
            [backgroundAudioTrackOutput copyNextSampleBuffer];

        /* DO MAGIC HERE */
        CMSampleBufferRef resultSampleBuffer =
            [self overlapBuffer:recordedSampleBuffer
           withBackgroundBuffer:backgroundSampleBuffer];
        /* END MAGIC HERE */

        [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
    }
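For concreteness, the "magic" could look roughly like the sketch below, assuming both buffers already contain interleaved 16-bit LPCM at the same sample rate; the clipped addition and the in-place mixing (rather than building a new CMSampleBufferRef) are simplifying assumptions, not working code:

    // Sketch: sum the background samples into the recorded buffer in place.
    // Assumes interleaved SInt16 LPCM with identical formats in both buffers.
    - (void)mixBackgroundBuffer:(CMSampleBufferRef)backgroundBuffer
             intoRecordedBuffer:(CMSampleBufferRef)recordedBuffer
    {
        AudioBufferList recordedList, backgroundList;
        CMBlockBufferRef recordedBlock = NULL, backgroundBlock = NULL;

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            recordedBuffer, NULL, &recordedList, sizeof(recordedList),
            NULL, NULL, 0, &recordedBlock);
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            backgroundBuffer, NULL, &backgroundList, sizeof(backgroundList),
            NULL, NULL, 0, &backgroundBlock);

        SInt16 *recordedSamples   = (SInt16 *)recordedList.mBuffers[0].mData;
        SInt16 *backgroundSamples = (SInt16 *)backgroundList.mBuffers[0].mData;
        size_t sampleCount = MIN(recordedList.mBuffers[0].mDataByteSize,
                                 backgroundList.mBuffers[0].mDataByteSize) / sizeof(SInt16);

        // Naive mix: add the two signals and clip to the 16-bit range.
        for (size_t i = 0; i < sampleCount; i++) {
            SInt32 sum = (SInt32)recordedSamples[i] + (SInt32)backgroundSamples[i];
            recordedSamples[i] = (SInt16)MAX(INT16_MIN, MIN(INT16_MAX, sum));
        }

        CFRelease(recordedBlock);
        CFRelease(backgroundBlock);
    }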

Problem:

I need to mix the incrementally fetched buffers from the local file into the live recorded buffers. The method I've created, overlapBuffer:withBackgroundBuffer:, doesn't do much right now. I know how to extract the AudioBufferList, AudioBuffer, mData etc. from a CMSampleBufferRef, but I'm not sure how to actually sum them; in any case I haven't been able to try different ways of doing it, because there is a more fundamental problem first. Before the magic can happen, I have two CMSampleBufferRefs, one obtained from the microphone and one from the file, and this is the problem:

The sample buffer received from the background music file is different from the one I get from the recording session. It seems that a single call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; returns a large number of samples. I realize this may be obvious to some people, but I've never worked at this level of media technology before. I see now that calling copyNextSampleBuffer every time I get a sampleBuffer from the session won't work, but I don't know when/where to call it instead.

As far as I can tell, the recording session delivers one audio sample per sample buffer, while the file reader delivers many samples per sample buffer. Could I somehow keep a counter of the samples/buffers I receive, then use the first file sampleBuffer and extract one sample at a time until the current file sampleBuffer has no more samples to give, and only then call [.. copyNext ..] and do the same with that buffer?

Since I'm in full control of both the recording's and the file's codecs, formats, and so on, I'm hoping such a solution won't ruin the alignment/synchronization of the audio. Given that both are sampled at the same rate, could this still be a problem?
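To make the counter idea above concrete, here is a rough sketch; the member variables holding the current file buffer and read offset, and the copySamplesFrom:offset:count:into: helper, are hypothetical names invented for illustration:

    // Hypothetical state kept between calls:
    //   CMSampleBufferRef currentBackgroundBuffer;  // file buffer currently being consumed
    //   CMItemCount backgroundSampleOffset;         // samples already consumed from it

    - (void)fillBackgroundSamples:(SInt16 *)destination count:(CMItemCount)samplesNeeded
    {
        CMItemCount filled = 0;
        while (filled < samplesNeeded) {
            // Pull the next file buffer once the current one is exhausted.
            if (currentBackgroundBuffer == NULL) {
                currentBackgroundBuffer = [backgroundAudioTrackOutput copyNextSampleBuffer];
                backgroundSampleOffset = 0;
                if (currentBackgroundBuffer == NULL)
                    break; // file ended; shouldn't happen since the recording is never longer
            }

            CMItemCount available =
                CMSampleBufferGetNumSamples(currentBackgroundBuffer) - backgroundSampleOffset;
            CMItemCount toCopy = MIN(available, samplesNeeded - filled);

            // Copy toCopy samples, starting at backgroundSampleOffset, into destination + filled.
            // (Extraction of the raw samples is omitted; see the mixing sketch above.)
            [self copySamplesFrom:currentBackgroundBuffer     // hypothetical helper
                           offset:backgroundSampleOffset
                            count:toCopy
                             into:destination + filled];

            backgroundSampleOffset += toCopy;
            filled += toCopy;

            if (backgroundSampleOffset >= CMSampleBufferGetNumSamples(currentBackgroundBuffer)) {
                CFRelease(currentBackgroundBuffer);
                currentBackgroundBuffer = NULL;
            }
        }
    }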


Note

I'm not even sure this is possible, but I see no immediate reason why it shouldn't be. It's also worth mentioning that when I try to use a video file instead of an audio file and keep pulling video sampleBuffers, they line up perfectly.

ios objective-c avfoundation
1 answer

I'm not familiar with AVCaptureOutput, since all my audio/music sessions have been built with AudioToolbox instead of AVFoundation. However, I think you should be able to set the size of the recording's capture buffer. If not, and you keep getting only one sample per buffer, I would recommend storing each piece of data you receive from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-read buffer, call [self overlapBuffer:auxiliarSampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
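In rough pseudocode, that accumulation idea might look like this; auxiliaryBuffer, the helper methods, and the fixed chunk size are placeholders for illustration, not real API:

    // Illustrative only: collect recorded samples until a chunk matching the
    // file reader's buffer size is available, then mix and append that chunk.
    static const size_t kChunkSamples = 1024;          // assumed to match the file buffer size
    // Hypothetical member: NSMutableData *auxiliaryBuffer;

    - (void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
    {
        [self appendRawSamplesOf:recordedSampleBuffer to:auxiliaryBuffer];   // hypothetical helper

        while (auxiliaryBuffer.length >= kChunkSamples * sizeof(SInt16)) {
            CMSampleBufferRef backgroundSampleBuffer =
                [backgroundAudioTrackOutput copyNextSampleBuffer];

            // Mix the first kChunkSamples of auxiliaryBuffer with the file buffer,
            // wrap the result in a CMSampleBufferRef, and hand it to the writer.
            CMSampleBufferRef resultSampleBuffer =
                [self mixChunkFrom:auxiliaryBuffer                            // hypothetical helper
              withBackgroundBuffer:backgroundSampleBuffer];

            [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];

            CFRelease(resultSampleBuffer);
            if (backgroundSampleBuffer)
                CFRelease(backgroundSampleBuffer);

            // Drop the consumed chunk from the front of the auxiliary buffer.
            [auxiliaryBuffer replaceBytesInRange:NSMakeRange(0, kChunkSamples * sizeof(SInt16))
                                       withBytes:NULL
                                          length:0];
        }
    }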

Hope this helps. If not, I can give an example of how to do it using CoreAudio. Using CoreAudio I was able to obtain 1024-sample LPCM buffers from both the microphone capture and the file reader, so the mixing could happen immediately.
