I have CMSampleBufferRefs that I decode using VTDecompressionSessionDecodeFrame, which results in a CVImageBufferRef once decoding of a frame is complete, so my questions are:
What would be the most efficient way to display these CVImageBufferRefs in a UIView?
I have managed to convert the CVImageBufferRef to a CGImageRef and display it by setting the CGImageRef as a CALayer's contents, but in that case the DecompressionSession was configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
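For reference, here is a minimal sketch of how that configuration plugs into session creation (the callback name decompressionOutputCallback and the formatDescription variable are placeholders rather than my actual code; requesting kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange instead would keep the decoded frames in YUV for the OpenGL ES route discussed further below):

#import <VideoToolbox/VideoToolbox.h>

// Ask the decoder for 32BGRA output (swap in kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for YUV).
NSDictionary *destinationAttributes =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

VTDecompressionOutputCallbackRecord callbackRecord = { decompressionOutputCallback, NULL };
VTDecompressionSessionRef session = NULL;
OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                               formatDescription,    // CMVideoFormatDescriptionRef from the stream
                                               NULL,                 // decoder specification
                                               (__bridge CFDictionaryRef)destinationAttributes,
                                               &callbackRecord,
                                               &session);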
Here is an example of how I convert the CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data must be in 32BGRA format for this to work):
CVPixelBufferLockBaseAddress(cvImageBuffer,0);
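// ...sketch of the rest (standard CVPixelBufferGet* / CGBitmapContext calls; the bitmap flags
// below assume a 32BGRA buffer, and the variable names just continue from the lock call above):
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
size_t   width       = CVPixelBufferGetWidth(cvImageBuffer);
size_t   height      = CVPixelBufferGetHeight(cvImageBuffer);
size_t   bytesPerRow = CVPixelBufferGetBytesPerRow(cvImageBuffer);

// Wrap the BGRA bytes in a bitmap context and snapshot it as a CGImage.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef    context    = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef      cgImage    = CGBitmapContextCreateImage(context);

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

// cgImage is then set as the CALayer's contents (and released with CGImageRelease when done):
// layer.contents = (__bridge id)cgImage;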
The WWDC14 session 513 ( https://developer.apple.com/videos/wwdc/2014/#513 ) hints that the YUV → RGB colorspace conversion (done on the CPU?) can be avoided if YUV-capable GLES magic is used - I wonder what that might be and how it could be done?
Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera, using OpenGL ES 2.0 with separate textures for the Y and UV components and a fragment shader program that does the YUV to RGB colorspace conversion calculations on the GPU - is all of that really required, or is there a more straightforward way this can be done?
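In case it clarifies what I mean by the GLCameraRipple approach: the piece that avoids any CPU copy or conversion is CVOpenGLESTextureCache, which maps the pixel buffer's planes directly into GLES textures. A rough sketch along those lines (this assumes the decompression session outputs kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, that videoTextureCache was created beforehand with CVOpenGLESTextureCacheCreate against the current EAGLContext, and that the helper name is mine, not Apple's):

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>   // GL_RED_EXT / GL_RG_EXT (GL_EXT_texture_rg)

// Wrap an NV12 CVPixelBufferRef into two GLES textures: one for Y, one for interleaved CbCr.
// (Error checking of the CVReturn results is omitted for brevity.)
static void CreateLumaChromaTextures(CVOpenGLESTextureCacheRef videoTextureCache,
                                     CVPixelBufferRef pixelBuffer,
                                     CVOpenGLESTextureRef *lumaOut,
                                     CVOpenGLESTextureRef *chromaOut)
{
    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

    // Plane 0: luma -> single-channel texture (no CPU copy, no colorspace conversion).
    glActiveTexture(GL_TEXTURE0);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RED_EXT, width, height,
                                                 GL_RED_EXT, GL_UNSIGNED_BYTE, 0, lumaOut);
    glBindTexture(CVOpenGLESTextureGetTarget(*lumaOut), CVOpenGLESTextureGetName(*lumaOut));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Plane 1: interleaved CbCr at half resolution -> two-channel texture.
    glActiveTexture(GL_TEXTURE1);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RG_EXT, width / 2, height / 2,
                                                 GL_RG_EXT, GL_UNSIGNED_BYTE, 1, chromaOut);
    glBindTexture(CVOpenGLESTextureGetTarget(*chromaOut), CVOpenGLESTextureGetName(*chromaOut));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // The fragment shader then samples both textures and multiplies by a BT.601/BT.709
    // matrix, so the actual YUV -> RGB conversion happens on the GPU.
}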
NB: In my use case I cannot use AVSampleBufferDisplayLayer, due to the way the input to the decompression becomes available.