AVAssetReader for AudioQueueBuffer

I am currently doing a small test project to find out if I can get samples from AVAssetReader for playback using AudioQueue on iOS.

I read this (Play unprocessed uncompressed sound using AudioQueue, without sound) and this (How to correctly read decoded PCM samples on iOS using AVAssetReader - currently incorrect decoding).

Both of those really helped. Before reading them, I had no sound at all. Now I get sound, but it plays SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.

I initialize the reader this way:

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
        [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
        [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
        [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
        nil];
    output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks
                                                        audioSettings:outputSettings];
    [reader addOutput:output];
    ...
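
For context, the elided setup around this snippet is roughly the usual AVAssetReader pattern. A minimal sketch, assuming a local file URL in `fileURL` (these variable names and the error handling are illustrative, not from my real code):

    NSError *error = nil;
    AVURLAsset *uasset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:uasset error:&error];
    if (!reader) {
        NSLog(@"could not create reader: %@", error);
        return;
    }
    // ... the outputSettings / addOutput: code shown above goes here ...
    if (![reader startReading]) {
        // startReading must succeed before copyNextSampleBuffer will return data
        NSLog(@"startReading failed: %@", reader.error);
    }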

And I take the data like this:

    CMSampleBufferRef ref = [output copyNextSampleBuffer];
    // NSLog(@"%@", ref);
    if (ref == NULL)
        return;
    //copy data to file
    //read next one
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data]; // autoreleased -- do NOT release it
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList,
            sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    // NSLog(@"%@", blockBuffer);
    if (blockBuffer == NULL) {
        // (the original sent [data release] here, but `data` is autoreleased,
        // so releasing it would over-release)
        CFRelease(ref);
        return;
    }
    // (A second check here tested `&audioBufferList == NULL`; the address of a
    // stack variable is never NULL, so that branch could never fire and is gone.)

    //stash data in same object
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
        /*
        Float32 *frame = (Float32 *)audioBuffer.mData;
        throwData = [NSData dataWithBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
        [self.delegate streamer:self didGetAudioBuffer:throwData];
        [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
        */
    }

    CFRelease(blockBuffer); // the "...RetainedBlockBuffer" call returned it at +1
    CFRelease(ref);         // copyNextSampleBuffer also returns a retained object
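
As a sanity check (an addition, not part of my original code), each sample buffer carries a format description you can compare against the ASBD handed to AudioQueueNewOutput below; run this before the CFRelease calls:

    // Ask the sample buffer what format the reader actually produced.
    CMAudioFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(ref);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
    if (asbd) {
        NSLog(@"rate=%.0f channels=%u bits=%u bytesPerFrame=%u flags=0x%x",
              asbd->mSampleRate, (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel, (unsigned)asbd->mBytesPerFrame,
              (unsigned)asbd->mFormatFlags);
    }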

which ultimately leads us to the audio queue configured in this way:

    // Apple's own code for canonical PCM
    audioDesc.mSampleRate       = 44100.0;
    audioDesc.mFormatID         = kAudioFormatLinearPCM;
    audioDesc.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    audioDesc.mBytesPerPacket   = 2 * sizeof(AudioUnitSampleType); // 8
    audioDesc.mFramesPerPacket  = 1;
    audioDesc.mBytesPerFrame    = 1 * sizeof(AudioUnitSampleType); // 4 (the original comment said 8, but sizeof(AudioUnitSampleType) is 4)
    audioDesc.mChannelsPerFrame = 2;
    audioDesc.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType); // 32

    err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput,
                              self, NULL, NULL, 0, &audioQueue);
    if (err) {
        #pragma warning handle error
        // never errs; am using a breakpoint to check
        return;
    }
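
It is worth spelling out what that canonical flag actually promises (the definitions below come from CoreAudioTypes.h; the aside is an addition, not part of the original question):

    // On iOS, AudioUnitSampleType is SInt32 carrying 8.24 fixed-point samples,
    // and the canonical audio-unit flags are defined as:
    UInt32 canonical = kAudioFormatFlagIsSignedInteger
                     | kAudioFormatFlagIsPacked
                     | kAudioFormatFlagIsNonInterleaved
                     | (kAudioUnitSampleFractionBits << kLinearPCMFormatFlagsSampleFractionShift);
    assert(canonical == kAudioFormatFlagsAudioUnitCanonical);
    // Note kAudioFormatFlagIsNonInterleaved: this ASBD promises 32-bit,
    // non-interleaved, fixed-point data, while the AVAssetReader output above
    // was asked for 16-bit interleaved integers. That mismatch is worth
    // double-checking whenever playback speed is wrong.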

and we enqueue buffers like this:

    while (inNumberBytes) {
        size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
        if (bufSpaceRemaining < inNumberBytes) {
            // This buffer can't take everything: enqueue it as-is.
            AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
            fillBuf->mAudioDataByteSize = bytesFilled;
            err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
            // The original was missing the next two lines; without rotating to
            // the next buffer and resetting the fill count, bufSpaceRemaining
            // stays 0 and the loop copies nothing forever. (This assumes
            // kNumAQBufs is the size of the audioQueueBuffer array, as in
            // Apple's AudioFileStream example.)
            fillBufferIndex = (fillBufferIndex + 1) % kNumAQBufs;
            bytesFilled = 0;
        }
        bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
        size_t copySize = (bufSpaceRemaining < inNumberBytes) ? bufSpaceRemaining
                                                              : inNumberBytes;
        if (bytesFilled > packetBufferSize) {
            return;
        }
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        memcpy((char *)fillBuf->mAudioData + bytesFilled,
               (const char *)inInputData + offset, copySize);
        bytesFilled += copySize;
        packetsFilled = 0;
        inNumberBytes -= copySize;
        offset += copySize;
    }
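
For reference, a minimal sketch (loosely following Apple's AudioFileStream example, which these variable names appear to come from; the constants are illustrative, not from my real code) of allocating the queue buffers this loop indexes into and starting playback:

    enum { kNumAQBufs = 3, kAQDefaultBufSize = 0x10000 };   // illustrative sizes

    AudioQueueBufferRef audioQueueBuffer[kNumAQBufs];
    OSStatus err;

    // Allocate the reusable buffers once, right after AudioQueueNewOutput succeeds.
    for (int i = 0; i < kNumAQBufs; ++i) {
        err = AudioQueueAllocateBuffer(audioQueue, kAQDefaultBufSize, &audioQueueBuffer[i]);
        if (err) {
            NSLog(@"AudioQueueAllocateBuffer failed: %d", (int)err);
            return;
        }
    }

    // Once a few buffers have been filled and enqueued, start playback.
    err = AudioQueueStart(audioQueue, NULL);
    if (err) {
        NSLog(@"AudioQueueStart failed: %d", (int)err);
    }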

I tried to include as much of the code as possible so that anyone can point out where I went wrong. At the same time, it seems to me that the problem lies either in the declaration of the track reader's output settings or in the declaration of the AudioQueue itself (where I describe what kind of audio I'm going to send). The fact is, I really don't know how to mathematically derive these numbers (bytes per packet, frames per packet, and what have you). An explanation of that would be very helpful, and thanks in advance for the help.
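
Since the question is how these numbers are derived: for interleaved, packed linear PCM they all follow from two invariants, shown here for the exact format requested from the reader above (16-bit signed integer, stereo, interleaved, little-endian). This is a sketch of a matching ASBD under those assumptions, not a guaranteed fix by itself:

    // For interleaved, packed linear PCM:
    //   mBytesPerFrame  = mChannelsPerFrame * (mBitsPerChannel / 8)
    //   mBytesPerPacket = mFramesPerPacket * mBytesPerFrame  (PCM has 1 frame/packet)
    AudioStreamBasicDescription desc = {0};
    desc.mSampleRate       = 44100.0;                  // matches AVSampleRateKey
    desc.mFormatID         = kAudioFormatLinearPCM;
    desc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    desc.mBitsPerChannel   = 16;                       // matches AVLinearPCMBitDepthKey
    desc.mChannelsPerFrame = 2;                        // matches AVNumberOfChannelsKey
    desc.mBytesPerFrame    = 2 * (16 / 8);             // 2 channels * 2 bytes = 4
    desc.mFramesPerPacket  = 1;
    desc.mBytesPerPacket   = 1 * 4;                    // 1 frame * 4 bytes = 4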

2 answers

For some reason, even though every audio queue example I've seen using LPCM had

    ASBD.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);

I think I need

    ASBD.mBitsPerChannel = 2 * bytesPerSample;

for a description like this:

    ASBD.mFormatID         = kAudioFormatLinearPCM;
    ASBD.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    ASBD.mBytesPerPacket   = bytesPerSample;
    ASBD.mBytesPerFrame    = bytesPerSample;
    ASBD.mFramesPerPacket  = 1;
    ASBD.mBitsPerChannel   = 2 * bytesPerSample;
    ASBD.mChannelsPerFrame = 2;
    ASBD.mSampleRate       = 48000;

I have no idea why this works, which really bothers me... but I hope to understand it eventually.

If anyone can explain to me why this works, I would be very grateful.
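
One way to demystify values like this (an added aside, using only the documented PCM invariants, not something from the original answer) is to check whether a given ASBD is internally consistent for interleaved PCM before handing it to the queue:

    // Internal-consistency check for an interleaved, packed linear PCM ASBD.
    static BOOL ASBDLooksConsistent(const AudioStreamBasicDescription *d) {
        if (d->mFormatID != kAudioFormatLinearPCM) return NO;
        if (d->mFormatFlags & kAudioFormatFlagIsNonInterleaved) return NO; // other rules apply
        BOOL framesOK  = d->mBytesPerFrame  == d->mChannelsPerFrame * (d->mBitsPerChannel / 8);
        BOOL packetsOK = d->mBytesPerPacket == d->mFramesPerPacket * d->mBytesPerFrame;
        return framesOK && packetsOK;
    }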


Not sure how much of an answer this is, but it would be too much text and too many links for a comment; hopefully this helps (and maybe helps you answer it yourself).

First off, I know from my current project that adjusting the sample rate will affect the speed of the sound, so you can try playing with those settings. But 44.1k is what I see in most default implementations, including Apple's SpeakHere example. Regardless, I would spend some time comparing your code against that example, because there are quite a few differences; it's a good sanity check before going further.
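
To make the rate/speed relationship concrete (the numbers here are purely illustrative): the queue plays bytes at whatever rate and frame size the ASBD declares, so a declaration that disagrees with the decoded data changes the playback speed:

    // If the decoder produced 44,100 frames/sec but the ASBD declares 88,200,
    // playback runs at declared/actual = 2x speed. Declaring a smaller
    // mBytesPerFrame than the data really has produces the same kind of
    // speed-up, since one real frame gets consumed as several declared frames.
    double declaredRate = 88200.0;
    double actualRate   = 44100.0;
    NSLog(@"playback speed factor: %.1fx", declaredRate / actualRate);  // 2.0x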

Next, check out this post: fooobar.com/questions/697216/... It talks about how you need to know the audio format, in particular how many bytes are in a frame, and about casting correctly.
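
A small example of what the casting point means (my sketch, assuming the 16-bit interleaved stereo integer format from the question; with float PCM you would cast to Float32 * instead):

    // Interpreting an AudioBuffer's bytes depends entirely on the negotiated format.
    AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
    SInt16 *samples = (SInt16 *)audioBuffer.mData;
    NSUInteger frameCount = audioBuffer.mDataByteSize / (2 * sizeof(SInt16)); // stereo
    for (NSUInteger i = 0; i < frameCount; ++i) {
        SInt16 left  = samples[2 * i];        // interleaved: L R L R ...
        SInt16 right = samples[2 * i + 1];
        (void)left; (void)right;              // ... process the frame here ...
    }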

Good luck, too. I've had quite a few questions posted here, on the Apple forums, and on the iOS forum (not the official one), with very few answers or much help. To get to where I am today (audio recording and streaming in ulaw), I ended up opening an Apple Dev Support ticket. Before chasing down that sound, I never even knew Dev Support existed. The good news is that if you have a valid dev account, you get 2 incidents for free! CoreAudio is not fun. The documentation is sparse, and besides SpeakHere there aren't many examples. One thing I did find is that the framework headers have some good information, and so does this book. Unfortunately, I've only just started the book; otherwise I might be able to help you further.

You can also check out some of my own posts, which I've tried to answer as fully as I could. This is my main question, on which I spent a lot of time compiling all the relevant links and code:

Using AQRecorder (Apple's audio recording example) in an Objective-C class

Trying to use AVAssetWriter for ulaw audio (2)

