AVCapture appendSampleBuffer

I'm going crazy over this. I've looked everywhere and tried everything I can think of.

I'm making an iPhone app that uses AVFoundation (specifically AVCaptureSession) to capture video with the iPhone camera.

I need to superimpose my own image on top of the video stream and have the overlay included in the recording.

I have the AVCaptureSession working: I can display the preview, access each frame, save it as a UIImage, and draw the overlay image onto it. I then convert this new UIImage to a CVPixelBufferRef. And to check that the bufferRef works, I converted it back to a UIImage, and it displays the composited image just fine.
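For reference, the overlay step itself is nothing exotic; my overlaidImage method boils down to drawing one image over the other with UIGraphics, roughly like this:

    // Draw the camera frame, then the overlay on top, and return the composite.
    - (UIImage *)overlaidImage:(UIImage *)baseImage :(UIImage *)overlay
    {
        UIGraphicsBeginImageContext(baseImage.size);
        [baseImage drawInRect:CGRectMake(0, 0, baseImage.size.width, baseImage.size.height)];
        [overlay drawInRect:CGRectMake(0, 0, baseImage.size.width, baseImage.size.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }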

The problem starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef so I can append it to my AVAssetWriterInput. The CMSampleBufferRef always comes back NULL when I try to create it.

Here is my captureOutput: delegate method:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
        UIImage *wheel = [self imageFromView:wheelView];
        UIImage *finalImage = [self overlaidImage:botImage :wheel];
        //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

        CVPixelBufferRef pixelBuffer = NULL;
        CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
        CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

        int status = CVPixelBufferCreateWithBytes(NULL,
                                                  self.view.bounds.size.width,
                                                  self.view.bounds.size.height,
                                                  kCVPixelFormatType_32BGRA,
                                                  (void *)CFDataGetBytePtr(image),
                                                  CGImageGetBytesPerRow(cgImage),
                                                  NULL,
                                                  0,
                                                  NULL,
                                                  &pixelBuffer);
        if (status == 0) {
            OSStatus result = 0;

            CMVideoFormatDescriptionRef videoInfo = NULL;
            result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
            NSParameterAssert(result == 0 && videoInfo != NULL);

            CMSampleBufferRef myBuffer = NULL;
            result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                                        pixelBuffer, true, NULL, NULL,
                                                        videoInfo, NULL, &myBuffer);
            NSParameterAssert(result == 0 && myBuffer != NULL); // always null :S

            NSLog(@"Trying to append");

            if (!CMSampleBufferDataIsReady(myBuffer)) {
                NSLog(@"sampleBuffer data is not ready");
                return;
            }

            if (![assetWriterInput isReadyForMoreMediaData]) {
                NSLog(@"Not ready for data :(");
                return;
            }

            if (![assetWriterInput appendSampleBuffer:myBuffer]) {
                NSLog(@"Failed to append pixel buffer");
            }
        }
    }

Another solution I keep hearing about is AVAssetWriterInputPixelBufferAdaptor, which eliminates the need for the messy CMSampleBufferRef wrapping. However, I've been combing through forums and the Apple developer docs and can't find a clear description or example of how to set it up or use it. If anyone has a working example they could show me, or can help me get past the problem above, I'd really appreciate it. I've been working on this non-stop for a week and I'm at the end of my rope.

Let me know if you need any other information.

Thanks in advance,

Michael

iphone xcode avfoundation avcapturesession avcapture
2 answers

You need an AVAssetWriterInputPixelBufferAdaptor; here is the code to create one:

    // Create dictionary for pixel buffer adaptor
    NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                                      kCVPixelBufferPixelFormatTypeKey, nil];

    // Create pixel buffer adaptor
    m_pixelsBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                             initWithAssetWriterInput:assetWriterInput
                             sourcePixelBufferAttributes:bufferAttributes];
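This assumes you already have an assetWriterInput attached to an AVAssetWriter. If you don't, a typical setup looks roughly like this (the outputURL and the 480x320 dimensions are placeholders for your own values):

    // Sketch: create the asset writer and the video input that the adaptor wraps.
    NSError *error = nil;
    AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:480], AVVideoWidthKey,
                                   [NSNumber numberWithInt:320], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput *assetWriterInput = [[AVAssetWriterInput alloc]
                                            initWithMediaType:AVMediaTypeVideo
                                               outputSettings:videoSettings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:assetWriterInput];

Note that the adaptor's pixelBufferPool is NULL until you call startWriting on the writer, so create the adaptor first and start the writer before pulling buffers from the pool.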

And the code to use it:

    // If the input is ready to accept more media data
    if (m_pixelsBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
        // Create a pixel buffer from the adaptor's pool
        CVPixelBufferRef pixelsBuffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(NULL, m_pixelsBufferAdaptor.pixelBufferPool, &pixelsBuffer);

        // Lock the pixel buffer's base address
        CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

        // Write your pixel data into the buffer (in your case, fill it with your finalImage data)
        [self yourFunctionToPutDataInPixelBuffer:CVPixelBufferGetBaseAddress(pixelsBuffer)];

        // Unlock the pixel buffer's base address
        CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);

        // Append the pixel buffer. Calculate currentFrameTime to suit your needs; the
        // simplest way is to start at zero and increment by one frame duration
        // (the inverse of your frame rate) each time you write a frame.
        [m_pixelsBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:currentFrameTime];

        // Release the pixel buffer
        CVPixelBufferRelease(pixelsBuffer);
    }
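The fill function itself is up to you. If your source is a UIImage (as in the question), one option, sketched here with a hypothetical method that takes the whole buffer instead of just its base address, is to wrap the locked buffer in a CGBitmapContext and draw into it:

    // Render a UIImage into a CVPixelBufferRef that was created with
    // kCVPixelFormatType_32BGRA. The buffer's base address must already be locked.
    - (void)fillPixelBuffer:(CVPixelBufferRef)pixelsBuffer withImage:(UIImage *)image
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelsBuffer),
                                                     CVPixelBufferGetWidth(pixelsBuffer),
                                                     CVPixelBufferGetHeight(pixelsBuffer),
                                                     8,
                                                     CVPixelBufferGetBytesPerRow(pixelsBuffer),
                                                     colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(context,
                           CGRectMake(0, 0,
                                      CVPixelBufferGetWidth(pixelsBuffer),
                                      CVPixelBufferGetHeight(pixelsBuffer)),
                           image.CGImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
    }

For currentFrameTime, at 30 fps for example, you can start at kCMTimeZero and advance by CMTimeMake(1, 30) after each appended frame.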

And don't forget to release your pixel buffer adaptor.


I do this using CMSampleBufferCreateForImageBuffer().

    OSStatus ret = 0;
    CMSampleBufferRef sample = NULL;
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
    timingInfo.presentationTimeStamp = pts;
    timingInfo.duration = duration;

    ret = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel, &videoInfo);
    if (ret != 0) {
        NSLog(@"CMVideoFormatDescriptionCreateForImageBuffer failed! %d", (int)ret);
        goto done;
    }

    ret = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel, true, NULL, NULL,
                                             videoInfo, &timingInfo, &sample);
    if (ret != 0) {
        NSLog(@"CMSampleBufferCreateForImageBuffer failed! %d", (int)ret);
        goto done;
    }
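Here pts and duration are assumed to come from your own pipeline. In a captureOutput: callback you can reuse the timing of the incoming sample buffer, or synthesize the values from a fixed frame rate:

    // Option 1: reuse the timing of the sample buffer the camera delivered.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    CMTime duration = CMSampleBufferGetDuration(sampleBuffer);

    // Option 2: synthesize timing at a fixed rate, e.g. 30 fps.
    // frameCount is a counter you increment once per written frame.
    CMTime frameDuration = CMTimeMake(1, 30);
    CMTime framePts = CMTimeMultiply(frameDuration, frameCount);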
