First, I'll answer your second question: don't wait for the application to crash. You can stop pulling audio from the track by checking the number of samples available in the CMSampleBufferRef you are reading; for example (this code is also included in the second half of my answer):
    CMSampleBufferRef sample;
    sample = [readerOutput copyNextSampleBuffer];
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sample);
    if (!sample || (numSamples == 0)) {
        // no more audio to pull from the track - stop reading here
    }
As for your first question, it depends on the type of audio you are capturing: it can be plain PCM (no compression) or VBR (compressed). I'm not going to bother with the PCM case, because it is simply not wise to send uncompressed audio from one phone to another over the network - it is unreasonably expensive and will clog your bandwidth. So we are left with VBR data. For that, you need to send the contents of the AudioBuffer and the AudioStreamPacketDescriptions that you pulled from the sample. But again, it's probably easiest to explain with code:
    - (void)broadcastSample
    {
        [broadcastLock lock];

        CMSampleBufferRef sample;
        sample = [readerOutput copyNextSampleBuffer];
        CMItemCount numSamples = CMSampleBufferGetNumSamples(sample);

        if (!sample || (numSamples == 0)) {
            // no more audio: tell the clients the song is over and stop the broadcast timer
            Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
            packet.sendReliably = NO;
            [self sendPacketToAllClients:packet];
            [sampleBroadcastTimer invalidate];
            if (sample) CFRelease(sample);
            [broadcastLock unlock];
            return;
        }

        NSLog(@"SERVER: going through sample loop");
        Boolean isBufferDataReady = CMSampleBufferDataIsReady(sample);

        CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer(sample);
        AudioBufferList audioBufferList;

        CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                       sample,
                       NULL,
                       &audioBufferList,
                       sizeof(audioBufferList),
                       NULL,
                       NULL,
                       kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                       &CMBuffer),
                   "could not read sample data");

        const AudioStreamPacketDescription *inPacketDescriptions;
        size_t packetDescriptionsSizeOut;
        size_t inNumberPackets;

        CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample,
                       &inPacketDescriptions,
                       &packetDescriptionsSizeOut),
                   "could not read sample packet descriptions");

        inNumberPackets = packetDescriptionsSizeOut / sizeof(AudioStreamPacketDescription);

        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];

        for (int i = 0; i < inNumberPackets; ++i) {
            NSLog(@"going through packets loop");
            SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
            UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;

            size_t packetSpaceRemaining      = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
            size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;

            // if the current network packet is full, ship it before copying more data in
            if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) ||
                (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE)) {
                if (![self encapsulateAndShipPacket:packet
                                 packetDescriptions:packetDescriptions
                                           packetID:assetOnAirID])
                    break;
            }

            // copy this audio packet's bytes into the staging buffer
            memcpy((char *)packet + packetBytesFilled,
                   (const char *)audioBuffer.mData + dataOffset,
                   dataSize);

            // copy its serialized packet description alongside it
            memcpy((char *)packetDescriptions + packetDescriptionsBytesFilled,
                   [self encapsulatePacketDescription:inPacketDescriptions[i]
                                         mStartOffset:packetBytesFilled],
                   AUDIO_STREAM_PACK_DESC_SIZE);

            packetBytesFilled += dataSize;
            packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE;
        }

        // release what we copied and let the next timer fire continue the broadcast
        CFRelease(CMBuffer);
        CFRelease(sample);
        [broadcastLock unlock];
    }
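The constants and staging buffers referenced above (MAX_PACKET_SIZE, AUDIO_STREAM_PACK_DESC_SIZE, packet, packetDescriptions and friends) are not defined in this answer. Something like the following is assumed; the exact sizes are my own guesses, so tune them to your transport:

    // Assumed definitions - not part of the original answer. Only
    // AUDIO_STREAM_PACK_DESC_SIZE (3 x 4-byte fields) and the 20 bytes of fixed
    // header fields follow directly from the code; the rest are illustrative.
    #define MAX_PACKET_SIZE                  2048       // audio payload staged per network packet
    #define MAX_PACKET_DESCRIPTIONS_SIZE      512       // room for serialized packet descriptions
    #define AUDIO_STREAM_PACK_DESC_SIZE        12       // mStartOffset + mVariableFramesInPacket + mDataByteSize, 4 bytes each
    #define AUDIO_BUFFER_PACKET_HEADER_SIZE   (20 + 16) // fixed fields + a fixed-length UTF-8 packetID (assumed 16 bytes)

    // Staging state used by broadcastSample (assumed to be ivars set up by the
    // broadcasting code before the timer starts firing):
    char     *packet;                        // MAX_PACKET_SIZE bytes of audio payload being filled
    char     *packetDescriptions;            // MAX_PACKET_DESCRIPTIONS_SIZE bytes of serialized descriptions
    size_t    packetBytesFilled;
    size_t    packetDescriptionsBytesFilled;
    UInt32    packetNumber;
    NSString *assetOnAirID;                  // identifies which asset is currently being broadcast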
Some of the methods used in the code above are ones you don't need to worry about, for example the ones that add headers to each packet (I created my own protocol; you can create your own). See this tutorial for more information.
    - (BOOL)encapsulateAndShipPacket:(void *)source
                  packetDescriptions:(void *)packetDescriptions
                            packetID:(NSString *)packetID
    {
        // assemble the complete network packet: header + audio data + packet descriptions
        char *headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);

        appendInt32(headerPacket, 'SNAP', 0);
        appendInt32(headerPacket, packetNumber, 4);
        appendInt16(headerPacket, PacketTypeAudioBuffer, 8);

        // 2-byte filler so the int32s that follow stay 4-byte aligned
        UInt16 filler = 0x00;
        appendInt16(headerPacket, filler, 10);

        appendInt32(headerPacket, packetBytesFilled, 12);
        appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);
        appendUTF8String(headerPacket, [packetID UTF8String], 20);

        int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;
        memcpy((char *)(headerPacket + offset), (char *)source, packetBytesFilled);

        offset += packetBytesFilled;
        memcpy((char *)(headerPacket + offset), (char *)packetDescriptions, packetDescriptionsBytesFilled);

        NSData *completePacket = [NSData dataWithBytes:headerPacket
                                                length:AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];

        NSLog(@"sending packet number %lu to all peers", (unsigned long)packetNumber);

        NSError *error;
        if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error]) {
            NSLog(@"Error sending data to clients: %@", error);
        }

        Packet *packet = [Packet packetWithData:completePacket];

        // reset the staging state for the next network packet
        packetBytesFilled = 0;
        packetDescriptionsBytesFilled = 0;
        packetNumber++;

        free(headerPacket);
        // free(packet);
        free(packetDescriptions);

        return YES;
    }

    - (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                              mStartOffset:(SInt64)mStartOffset
    {
        // only 12 bytes in total, because mStartOffset is sent as a 32-bit integer, not 64-bit
        char *packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE);

        appendInt32(packetDescription, (UInt32)mStartOffset, 0);
        appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
        appendInt32(packetDescription, inPacketDescription.mDataByteSize, 8);

        return packetDescription;
    }
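The appendInt32 / appendInt16 / appendUTF8String helpers are not shown in this answer either (they belong to the same custom-protocol code). A minimal sketch of what they could look like, assuming the integers go out in big-endian (network) byte order:

    // Sketch of the serialization helpers used above - an assumption on my part,
    // not code from the original answer. Everything is written big-endian so
    // both ends of the connection agree.
    void appendInt32(char *buffer, uint32_t value, size_t offset) {
        uint32_t bigEndian = CFSwapInt32HostToBig(value);
        memcpy(buffer + offset, &bigEndian, sizeof(bigEndian));
    }

    void appendInt16(char *buffer, uint16_t value, size_t offset) {
        uint16_t bigEndian = CFSwapInt16HostToBig(value);
        memcpy(buffer + offset, &bigEndian, sizeof(bigEndian));
    }

    void appendUTF8String(char *buffer, const char *string, size_t offset) {
        // copy the string including its terminating NUL
        memcpy(buffer + offset, string, strlen(string) + 1);
    }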
Receiving the data:
    - (void)receiveData:(NSData *)data
               fromPeer:(NSString *)peerID
              inSession:(GKSession *)session
                context:(void *)context
    {
        Packet *packet = [Packet packetWithData:data];
        if (packet == nil) {
            NSLog(@"Invalid packet: %@", data);
            return;
        }

        Player *player = [self playerWithPeerID:peerID];
        if (player != nil) {
            player.receivedResponse = YES;
            // ... process the received packet
        }
    }
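For completeness, here is a rough sketch (mine, not from the original answer) of how the receiving side could unpack one of these audio packets. It mirrors the header layout written by encapsulateAndShipPacket: and rebuilds the AudioStreamPacketDescriptions so the data can be handed to, say, an Audio Queue for playback; the method name and the byte-order assumption are mine:

    // Assumes the same constants and big-endian byte order as the sender.
    static uint32_t readInt32(const char *buffer, size_t offset) {
        uint32_t value;
        memcpy(&value, buffer + offset, sizeof(value));
        return CFSwapInt32BigToHost(value);
    }

    static uint16_t readInt16(const char *buffer, size_t offset) {
        uint16_t value;
        memcpy(&value, buffer + offset, sizeof(value));
        return CFSwapInt16BigToHost(value);
    }

    - (void)handleAudioPacketData:(NSData *)data   // hypothetical helper
    {
        const char *bytes = (const char *)[data bytes];

        uint32_t fourCC       = readInt32(bytes, 0);   // 'SNAP'
        uint32_t packetNumber = readInt32(bytes, 4);
        uint16_t packetType   = readInt16(bytes, 8);
        uint32_t audioBytes   = readInt32(bytes, 12);  // packetBytesFilled on the sender
        uint32_t descBytes    = readInt32(bytes, 16);  // packetDescriptionsBytesFilled

        if (fourCC != 'SNAP' || packetType != PacketTypeAudioBuffer) return;

        const char *audioData       = bytes + AUDIO_BUFFER_PACKET_HEADER_SIZE;
        const char *descriptionData = audioData + audioBytes;
        uint32_t numDescriptions    = descBytes / AUDIO_STREAM_PACK_DESC_SIZE;

        // rebuild the descriptions that were flattened by encapsulatePacketDescription:
        AudioStreamPacketDescription *descs =
            malloc(numDescriptions * sizeof(AudioStreamPacketDescription));
        for (uint32_t i = 0; i < numDescriptions; i++) {
            const char *d = descriptionData + i * AUDIO_STREAM_PACK_DESC_SIZE;
            descs[i].mStartOffset            = readInt32(d, 0);
            descs[i].mVariableFramesInPacket = readInt32(d, 4);
            descs[i].mDataByteSize           = readInt32(d, 8);
        }

        // audioData plus descs can now be enqueued for playback (e.g. with an
        // Audio Queue); free(descs) when you are done with them.
        NSLog(@"received packet %u with %u packet descriptions", packetNumber, numDescriptions);
    }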
Notes:
There is a lot of networking code I did not cover here (mainly on the receiving side; I used a lot of custom objects without showing their definitions). I left it out because explaining all of it goes well beyond the scope of a single SO answer. However, you can follow the excellent tutorial by Ray Wenderlich; he takes his time explaining the networking concepts, and the architecture I use above is taken from it almost verbatim (with one exception; see the next point).
Depending on your project, GKSession may not be suitable (especially if your project is real-time, or if you need more than 2-3 devices connected at the same time); it has a lot of limitations. You will have to dig a little deeper and use Bonjour directly instead. There is a good iPhone projects book with a nice, quick chapter that gives a solid example of using Bonjour services. It's not as scary as it sounds (and Apple's documentation is fairly authoritative on the matter).
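To give a flavour of the Bonjour route, advertising and discovering a service looks roughly like this. This sketch is mine, not from the book or the original answer; the service type, name, port and the _service/_browser/_foundService ivars are placeholders, and you still have to bind a real listening socket to the advertised port:

    // Server side: advertise the stream over Bonjour.
    - (void)startAdvertising
    {
        _service = [[NSNetService alloc] initWithDomain:@"local."
                                                   type:@"_myaudio._tcp."
                                                   name:@"AudioBroadcast"
                                                   port:52525];
        _service.delegate = self;
        [_service publish];
    }

    // Client side: browse for the service, then resolve it to get an address to connect to.
    - (void)startBrowsing
    {
        _browser = [[NSNetServiceBrowser alloc] init];
        _browser.delegate = self;
        [_browser searchForServicesOfType:@"_myaudio._tcp." inDomain:@"local."];
    }

    - (void)netServiceBrowser:(NSNetServiceBrowser *)browser
               didFindService:(NSNetService *)service
                   moreComing:(BOOL)moreComing
    {
        _foundService = service;                 // keep a strong reference while resolving
        _foundService.delegate = self;
        [_foundService resolveWithTimeout:5.0];  // netServiceDidResolveAddress: fires when done
    }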
I noticed that you are using GCD for your multithreading. Again, if you are dealing with real-time audio, you don't want to use high-level frameworks that do the heavy lifting for you (GCD is one of them). Read this great article for more on why. Also read the lengthy discussion between me and justin in the comments of this answer.
You can also check out MTAudioProcessingTap, introduced in iOS 6. It can potentially save you the hassle of dealing with AVAssets yourself. I have not tested it, though; it came out after I had done all my work.
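I haven't verified this either, but for reference, wiring up a tap looks roughly like this (a sketch based on the MediaToolbox/MTAudioProcessingTap.h header; the method name is made up, and the process callback is where you would grab the PCM samples instead of reading them with AVAssetReader):

    #import <MediaToolbox/MediaToolbox.h>
    #import <AVFoundation/AVFoundation.h>

    // Called for every slice of audio the player renders; pull the source audio
    // and do whatever you like with it (inspect it, copy it, send it on, ...).
    static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                           MTAudioProcessingTapFlags flags,
                           AudioBufferList *bufferListInOut,
                           CMItemCount *numberFramesOut,
                           MTAudioProcessingTapFlags *flagsOut)
    {
        MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                           flagsOut, NULL, numberFramesOut);
        // bufferListInOut now holds the PCM samples for this slice
    }

    - (void)attachTapToPlayerItem:(AVPlayerItem *)item track:(AVAssetTrack *)audioTrack
    {
        MTAudioProcessingTapCallbacks callbacks = {
            .version    = kMTAudioProcessingTapCallbacksVersion_0,
            .clientInfo = NULL,
            .init = NULL, .finalize = NULL, .prepare = NULL, .unprepare = NULL, // optional callbacks omitted
            .process    = tapProcess,
        };

        MTAudioProcessingTapRef tap = NULL;
        OSStatus status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                                     &tap);
        if (status != noErr) return;

        AVMutableAudioMixInputParameters *params =
            [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
        params.audioTapProcessor = tap;

        AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
        mix.inputParameters = @[params];
        item.audioMix = mix;

        CFRelease(tap);   // the audio mix keeps its own reference
    }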
Last but not least, check out the Learning Core Audio book. It is a widely recognized reference on this subject. I remember being stuck the same way you are now when I first asked these questions. Core Audio is heavy going and it takes time to sink in. SO will only give you pointers; you will have to take the time to absorb the material yourself, and then it will all click. Good luck!