How to convert from CVPixelBufferRef to OpenCV cv::Mat

I would like to perform some operations on a CVPixelBufferRef and end up with a cv::Mat:

  • Crop to a region of interest
  • Scale to a fixed size
  • Compare histograms
  • Convert to grayscale — 8 bits per pixel (CV_8UC1)

I'm not sure of the most efficient order for these operations; however, I do know they are all available on an OpenCV matrix, so I would like to know how to convert to one.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        cv::Mat frame = f(pixelBuffer); // how do I implement f()?
    }
c++ ios objective-c opencv avcapturesession
2 answers

I found the answer in some excellent GitHub source code. I have adapted it here for simplicity; it also does the grayscale conversion for me.

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

    // Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output
    // @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) }
    NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

    // The first plane (index 0) is the luma (grayscale) plane.
    // See http://en.wikipedia.org/wiki/YUV for more information about the YUV format.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    // Pass the plane's stride: rows may be padded, so 0 is only safe for contiguous buffers
    cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, bytesPerRow);

    // Use the mat here (it is only valid while the buffer stays locked)

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
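The reason plane 0 works as a grayscale image: in the biplanar 420YpCbCr8 ("NV12") layout, plane 0 is a full-resolution 8-bit luma (Y) plane and plane 1 is a half-resolution interleaved CbCr plane. A small numpy sketch of that memory layout (the dimensions are hypothetical):

```python
import numpy as np

# Simulate the layout of a 420YpCbCr8BiPlanar buffer: a full-resolution
# Y plane followed by a half-height interleaved CbCr plane.
width, height = 6, 4
y_plane = np.arange(width * height, dtype=np.uint8).reshape(height, width)
cbcr_plane = np.full((height // 2, width), 128, np.uint8)  # Cb,Cr pairs
buffer = np.concatenate([y_plane.ravel(), cbcr_plane.ravel()])

# "Plane 0", reinterpreted as height x width 8-bit, is already a valid
# grayscale image -- no color conversion needed.
gray = buffer[:width * height].reshape(height, width)
print(gray.shape)  # (4, 6)
```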

I think the best order would be:

  • Convert to grayscale (nearly free, since the Y plane is already grayscale)
  • Crop (a cheap operation that reduces the number of pixels to work with)
  • Scale down
  • Histogram equalization

I am using this. My cv::Mat is configured with the BGR (CV_8UC3) color format.

CVImageBufferRef → cv::Mat

    - (cv::Mat)matFromImageBuffer:(CVImageBufferRef)buffer
    {
        CVPixelBufferLockBaseAddress(buffer, 0);

        void *address = CVPixelBufferGetBaseAddress(buffer);
        int width = (int)CVPixelBufferGetWidth(buffer);
        int height = (int)CVPixelBufferGetHeight(buffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

        // Wrap the BGRA data, then clone so the Mat owns its pixels and
        // remains valid after the buffer is unlocked and possibly reused.
        cv::Mat mat = cv::Mat(height, width, CV_8UC4, address, bytesPerRow).clone();
        //cv::cvtColor(mat, mat, CV_BGRA2BGR); // if 3-channel BGR is needed

        CVPixelBufferUnlockBaseAddress(buffer, 0);
        return mat;
    }
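Note that the `cv::Mat(height, width, CV_8UC4, address, …)` constructor wraps the buffer's memory rather than copying it, so a Mat built this way is only valid while the pixel buffer stays locked and alive; `clone()` gives the Mat its own pixels. The same view-versus-copy distinction can be illustrated with numpy arrays:

```python
import numpy as np

backing = np.zeros((4, 4), np.uint8)  # stands in for the locked pixel buffer

view = backing[:, :]   # wraps the same memory, like cv::Mat(h, w, type, ptr)
copy = backing.copy()  # owns its pixels, like mat.clone()

backing[0, 0] = 255    # "the capture system reuses the buffer"
print(view[0, 0], copy[0, 0])  # 255 0
```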

cv::Mat → CVImageBufferRef (CVPixelBufferRef)

    - (CVImageBufferRef)getImageBufferFromMat:(cv::Mat)mat
    {
        cv::cvtColor(mat, mat, CV_BGR2BGRA);

        int width = mat.cols;
        int height = mat.rows;

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 // [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGImageCompatibilityKey,
                                 // [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 [NSNumber numberWithInt:width], (id)kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:height], (id)kCVPixelBufferHeightKey,
                                 nil];

        CVPixelBufferRef imageBuffer;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                              kCVPixelFormatType_32BGRA,
                                              (__bridge CFDictionaryRef)options,
                                              &imageBuffer);
        NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

        // Copy row by row: the buffer's rows may be padded (bytesPerRow >= width * 4),
        // so a single flat memcpy of mat.total() * 4 bytes could shear the rows.
        for (int row = 0; row < height; row++) {
            memcpy(base + row * bytesPerRow, mat.ptr(row), width * 4);
        }

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return imageBuffer;
    }
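One detail worth stressing when copying a Mat into a CVPixelBuffer: `CVPixelBufferGetBytesPerRow` can be larger than `width * 4` because rows may be padded, so a single flat memcpy misaligns every row after the first. A numpy sketch of the stride-aware row-by-row copy (the padded stride value here is hypothetical):

```python
import numpy as np

width, height, channels = 5, 3, 4
bytes_per_row = 32  # destination stride; larger than width * channels (20)

src = np.arange(height * width * channels, dtype=np.uint8).reshape(height, width, channels)
dst = np.zeros(height * bytes_per_row, np.uint8)  # stands in for the CVPixelBuffer memory

# Row-by-row copy, equivalent to memcpy(base + row * bytesPerRow, mat.ptr(row), width * 4)
for row in range(height):
    start = row * bytes_per_row
    dst[start : start + width * channels] = src[row].ravel()

# Reading back through the stride recovers the original rows intact
strided = np.lib.stride_tricks.as_strided(dst, (height, width, channels),
                                          (bytes_per_row, channels, 1))
print("rows intact:", np.array_equal(strided, src))  # rows intact: True
```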
