Since iOS 6, Apple has provided the ability to create a CIImage from native YUV data through this call: initWithCVPixelBuffer:options:.
The Core Image Programming Guide mentions this feature:

Take advantage of YUV image support in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a conversion cost between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color conversion.
options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
But I cannot get it to work correctly. I have raw YUV data, so this is what I did:
void *YUV[3] = { data[0], data[1], data[2] };
size_t planeWidth[3] = { width, width / 2, width / 2 };
size_t planeHeight[3] = { height, height / 2, height / 2 };
size_t planeBytesPerRow[3] = { stride, stride / 2, stride / 2 };

CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                  width, height,
                                                  kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                                  NULL,                    // dataPtr
                                                  width * height * 3 / 2,  // dataSize
                                                  3,                       // numberOfPlanes
                                                  YUV,
                                                  planeWidth, planeHeight, planeBytesPerRow,
                                                  NULL, NULL, NULL,
                                                  &pixelBuffer);

NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
The image comes back nil. Any ideas what I am missing?
EDIT: I added locking and unlocking of the base address around the call. In addition, I dumped the pixel buffer's contents to verify that it actually holds the data. There seems to be something wrong with the init call itself: the CIImage object still comes back nil.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
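For what it's worth, one variation I have not tried yet (so this is an untested sketch, reusing the same data/width/height/stride variables from above): instead of wrapping my own pointers with CVPixelBufferCreateWithPlanarBytes, let CoreVideo allocate the buffer itself with IOSurface backing via CVPixelBufferCreate, and memcpy the planes in, in case Core Image rejects buffers that wrap client-owned memory:

```objc
// Untested sketch: CoreVideo owns the buffer (IOSurface-backed), we copy planes in.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                   (__bridge CFDictionaryRef)attrs,
                                   &pixelBuffer);
if (ret == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    for (size_t plane = 0; plane < 3; plane++) {
        uint8_t *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        // For 4:2:0 planar data the chroma planes have half the stride and half the rows.
        size_t srcStride = (plane == 0) ? stride : stride / 2;
        size_t rows = (plane == 0) ? height : height / 2;
        size_t rowBytes = MIN(srcStride, dstStride);
        for (size_t row = 0; row < rows; row++) {
            memcpy(dst + row * dstStride,
                   (uint8_t *)data[plane] + row * srcStride,
                   rowBytes);
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
}
```

The row-by-row copy is needed because CoreVideo may choose a bytes-per-row different from my source stride.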