iOS 6: how to use YUV to RGB conversion from a CVPixelBufferRef to CIImage

Starting with iOS 6, Apple provides the ability to use native YUV for CIImage through this call:

initWithCVPixelBuffer:options:

In the Core Image Programming Guide, they mention this feature:

Take advantage of YUV image support in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color conversion.

    options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

But I cannot get it to work correctly. I have raw YUV data, so this is what I did:

    void *YUV[3] = { data[0], data[1], data[2] };
    size_t planeWidth[3] = { width, width/2, width/2 };
    size_t planeHeight[3] = { height, height/2, height/2 };
    size_t planeBytesPerRow[3] = { stride, stride/2, stride/2 };

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                       width, height,
                       kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                       nil, width*height*1.5, 3,
                       YUV, planeWidth, planeHeight, planeBytesPerRow,
                       nil, nil, nil, &pixelBuffer);

    NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                           @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];

But I get nil for the image. Any idea what I am missing?

EDIT: I added locking and unlocking of the base address around the call. In addition, I dumped the pixel buffer's data to make sure the pixel buffer actually holds the data. There seems to be something wrong with the init call: the CIImage object comes back nil.

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
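A sketch of the kind of checks described, using ret and pixelBuffer from the snippet above:

    // Sketch: confirm the create call succeeded and that each plane is
    // reachable through the pixel buffer before handing it to CIImage.
    NSLog(@"CVPixelBufferCreateWithPlanarBytes returned %d", ret);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(pixelBuffer); plane++) {
        uint8_t *base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        NSLog(@"plane %zu: base=%p bytesPerRow=%zu height=%zu",
              plane, base,
              CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane),
              CVPixelBufferGetHeightOfPlane(pixelBuffer, plane));
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);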
2 answers

There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer.

Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not backed by an IOSurface ...

... To do this, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().

    NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
        nil];
    // You may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey,
    // kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

Alternatively, you can get IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
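Putting QA1781 together with the setup from the question, an untested sketch might look like the following; data, width, height, and stride are assumed to be the asker's variables:

    #import <CoreImage/CoreImage.h>
    #import <CoreVideo/CoreVideo.h>

    // Sketch: create an IOSurface-backed buffer per QA1781, then copy the
    // raw YUV planes into it.
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                       (__bridge CFDictionaryRef)attrs,
                                       &pixelBuffer);
    if (ret != kCVReturnSuccess || pixelBuffer == NULL) {
        NSLog(@"CVPixelBufferCreate failed: %d", ret);
        return;
    }

    // CVPixelBufferCreate allocates its own storage, so the existing planes
    // are copied row by row; the buffer's bytes-per-row may include padding.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(pixelBuffer); plane++) {
        uint8_t *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        size_t rows = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
        size_t srcStride = (plane == 0) ? stride : stride / 2; // per the question's layout
        const uint8_t *src = data[plane];
        for (size_t row = 0; row < rows; row++) {
            memcpy(dst + row * dstStride, src + row * srcStride,
                   MIN(srcStride, dstStride));
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    CVPixelBufferRelease(pixelBuffer);

If initWithCVPixelBuffer: still returns nil after this, note that Apple's own snippet above uses the bi-planar format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange; it may be that Core Image only accepts the bi-planar YUV layouts, in which case the U and V planes would have to be interleaved into a single CbCr plane.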


I am working on a similar issue and kept finding the same quote from Apple, with no additional information on how to work in a YUV color space. I came upon the following:

By default, Core Image assumes that processing nodes are 128 bits per pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See the Quartz 2D Programming Guide for information on creating and using CGColorSpace objects.) With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video sources such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.

I note that there are no YUV color spaces available, only Gray and RGB (and their calibrated cousins). I'm not sure yet how to convert the color space, but I will update this answer if I find out.
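In the meantime, the one concrete part of the quote is supplying the working color space itself. A minimal sketch of handing a CIContext an RGB working space follows; note that kCIContextWorkingColorSpace comes from the Mac-oriented documentation quoted above, and its availability on iOS 6 is an assumption here, as is the illustrative choice of DeviceRGB:

    // Sketch: create a CIContext with an explicit RGB working color space.
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    NSDictionary *ciOptions = @{ kCIContextWorkingColorSpace : (__bridge id)rgb };
    CIContext *context = [CIContext contextWithOptions:ciOptions];
    CGColorSpaceRelease(rgb);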
