What is the most efficient way to display CVImageBufferRef on iOS

I have CMSampleBufferRefs that I decode using VTDecompressionSessionDecodeFrame, which yields a CVImageBufferRef once frame decoding completes, so my questions are:

What would be the most efficient way to display these CVImageBufferRefs in a UIView?

I have managed to convert the CVImageBufferRef to a CGImageRef and display it by setting the CGImageRef as the CALayer's contents, but that required the DecompressionSession to be configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
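
For context, a minimal sketch of how such a decompression session could be set up; formatDescription and the didDecompress output callback are assumptions, not taken from the question:

    // Sketch only: formatDescription (CMVideoFormatDescriptionRef) and the
    // didDecompress C callback are assumed to exist elsewhere.
    NSDictionary *destinationAttributes =
        @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    VTDecompressionOutputCallbackRecord callbackRecord;
    callbackRecord.decompressionOutputCallback = didDecompress;
    callbackRecord.decompressionOutputRefCon = NULL;

    VTDecompressionSessionRef decompressionSession = NULL;
    OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                   formatDescription,
                                                   NULL, // decoder specification
                                                   (__bridge CFDictionaryRef)destinationAttributes,
                                                   &callbackRecord,
                                                   &decompressionSession);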

Here is an example of how I converted the CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data must be in 32BGRA format for this to work):

    CVPixelBufferLockBaseAddress(cvImageBuffer, 0);

    // get image properties
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cvImageBuffer);
    size_t width = CVPixelBufferGetWidth(cvImageBuffer);
    size_t height = CVPixelBufferGetHeight(cvImageBuffer);

    // create a CGImageRef from the CVImageBufferRef
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

    // release the context and color space, and unlock the pixel buffer
    // now that its pixels have been copied into the CGImage
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

    // now the CGImageRef can be displayed either by setting it as CALayer contents
    // or by creating a UIImage with [UIImage imageWithCGImage:cgImage] that can be
    // shown in a UIImageView ...
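
For completeness, a minimal sketch of the two display options mentioned in the comments above; view and imageView are assumed to exist and are not from the question:

    // Option 1: set the CGImageRef directly as the layer contents
    view.layer.contents = (__bridge id)cgImage;

    // Option 2: wrap it in a UIImage and show it in a UIImageView
    imageView.image = [UIImage imageWithCGImage:cgImage];

    // The layer / UIImage retain what they need, so the local reference can be released
    CGImageRelease(cgImage);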

WWDC14 session 513 ( https://developer.apple.com/videos/wwdc/2014/#513 ) hints that the YUV → RGB colorspace conversion (done on the CPU?) can be avoided if YUV-capable GLES magic is used - I wonder what that might be and how it could be done?

Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera using OpenGLES 2.0, with separate textures for the Y and UV components and a fragment shader program that does the YUV to RGB colorspace conversion calculations on the GPU - is all of that really required, or is there an even more straightforward way to do this?
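
For reference, a condensed sketch of the texture-upload part of that approach, assuming a bi-planar 420 YpCbCr pixel buffer, an existing CVOpenGLESTextureCacheRef (videoTextureCache), an OpenGL ES 2.0 context, and width/height variables; names and error handling are simplified:

    // Y plane → one-channel (red) texture
    CVOpenGLESTextureRef lumaTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
                                                 pixelBuffer, NULL, GL_TEXTURE_2D,
                                                 GL_RED_EXT, (GLsizei)width, (GLsizei)height,
                                                 GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &lumaTexture);

    // Interleaved CbCr plane → two-channel (red/green) texture at half resolution
    CVOpenGLESTextureRef chromaTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
                                                 pixelBuffer, NULL, GL_TEXTURE_2D,
                                                 GL_RG_EXT, (GLsizei)width / 2, (GLsizei)height / 2,
                                                 GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chromaTexture);

    // A fragment shader then samples both textures and applies a YUV → RGB matrix, e.g.
    //   yuv.x  = texture2D(SamplerY,  v_texCoord).r;
    //   yuv.yz = texture2D(SamplerUV, v_texCoord).rg - vec2(0.5);
    //   rgb    = conversionMatrix * yuv;

GLCameraRipple additionally sets texture filtering/wrapping and draws a full-screen quad; the sketch above only covers the CVPixelBuffer-to-texture step.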

NOTE: In my use case I cannot use AVSampleBufferDisplayLayer, due to the way the input to the decompression becomes available.

2 answers

If you are getting the CVImageBufferRef from a CMSampleBufferRef that you receive from captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to do that conversion and can get the image data directly from the CMSampleBufferRef. Here is the code:

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *frameImage = [UIImage imageWithData:imageData];

The API description doesn't give any information about whether 32BGRA is supported or not, and it produces the imageData, along with any metadata, in JPEG format without any compression applied. If your goal is to display the image on screen or use it with a UIImageView, this is the quick way.


Update: the original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is not available on iOS.


A UIView is backed by a CALayer, whose contents property supports several types of images. As detailed in my answer to a similar question for macOS, you can use a CALayer to render a CVPixelBuffer that is backed by an IOSurface. (Caveat: I have only tested this on macOS.)
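
For illustration, a minimal sketch of that macOS-only approach as described above (per the update, the compatibility key is not available on iOS); the buffer dimensions and the layer variable are assumptions:

    // macOS only: request an IOSurface-backed, Core Animation-compatible pixel buffer
    NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey : @YES };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, 1920, 1080, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attributes, &pixelBuffer);

    // ... fill the buffer, then hand it to the layer as its contents
    layer.contents = (__bridge id)pixelBuffer;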

