OpenCV video processing on iOS

I am trying to follow a tutorial found here on processing iOS video using the OpenCV framework.

I have successfully added the iOS OpenCV framework to my project, but there seems to be a mismatch between my framework and the one presented in the tutorial, and I hope someone can help me.

OpenCV uses the cv::Mat type to represent images. When using the AVFoundation delegate callbacks to process images from the camera, I need to convert each CMSampleBufferRef to this type.

It seems that the OpenCV framework presented in the tutorial provides a library using

 #import <opencv2/highgui/cap_ios.h> 

with a new delegate protocol:

Can someone tell me where I can find this header, or show me a quick conversion between CMSampleBufferRef and cv::Mat?

EDIT

There seem to be many different distributions of the OpenCV framework (at least for iOS). I downloaded it from various "official" sites, and also installed it with tools like Fink and Homebrew, following their respective instructions. I even compared the header files installed in /usr/local/include/opencv/; each time they were different. Each OpenCV download also comes with different CMake files and conflicting README files. I think I managed to build a good version for iOS, with the AVCapture functionality built into the framework (with the header <opencv2/highgui/cap_ios.h>), through this link, and then building the library using the Python script in the ios directory with the command python opencv/ios/build_framework.py ios . I will try to update.

ios frameworks opencv
2 answers

Here is the conversion I'm using. You lock the pixel buffer, create a cv::Mat, process the cv::Mat, and then unlock the pixel buffer.

 - (void)captureOutput:(AVCaptureOutput *)captureOutput
 didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection
 {
     CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
     CVPixelBufferLockBaseAddress(pixelBuffer, 0);

     size_t bufferWidth  = CVPixelBufferGetWidth(pixelBuffer);
     size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
     size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(pixelBuffer);
     unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

     // Wrap the buffer in a cv::Mat; no memory is copied.
     cv::Mat image((int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow);

     // Processing here
     // End processing

     CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
 }

The above method does not copy any memory, so you do not own it; pixelBuffer will free it for you. If you need your own copy of the buffer, just do:

 cv::Mat copied_image = image.clone(); 
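To see why the clone matters, here is a minimal plain C++ sketch (deliberately without OpenCV or AVFoundation, so the names and buffers are all hypothetical stand-ins): a wrapped view merely aliases the camera's memory and sees whatever is written there after the buffer is recycled, while a deep copy keeps its own bytes.

```cpp
#include <cstring>
#include <vector>

// Stand-in for wrapping the pixel buffer without copying,
// like cv::Mat(height, width, CV_8UC4, pixel, bytesPerRow).
unsigned char firstByteOfWrappedView() {
    unsigned char cameraBuffer[4] = {10, 20, 30, 40}; // pretend camera frame
    unsigned char *wrapped = cameraBuffer;            // alias only, no copy
    std::memset(cameraBuffer, 0, sizeof cameraBuffer); // system recycles the buffer
    return wrapped[0];                                 // view now sees the recycled data: 0
}

// Stand-in for image.clone(): the bytes are copied into owned storage.
unsigned char firstByteOfClonedCopy() {
    unsigned char cameraBuffer[4] = {10, 20, 30, 40};
    std::vector<unsigned char> cloned(cameraBuffer, cameraBuffer + 4); // deep copy
    std::memset(cameraBuffer, 0, sizeof cameraBuffer);
    return cloned[0];                                  // copy is unaffected: 10
}
```

So if the cv::Mat must outlive the delegate callback, clone it before the pixel buffer is unlocked and recycled.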

This is an updated version of the code in the previous accepted answer, which should work with any iOS device.

Since the number of bytes per row is not always equal to bufferWidth * 4, at least on the iPhone 6 and iPhone 6 Plus, we need to pass the number of bytes in each row as the last argument to the cv::Mat constructor.

 CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CVPixelBufferLockBaseAddress(pixelBuffer, 0);

 size_t bufferWidth  = CVPixelBufferGetWidth(pixelBuffer);
 size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
 size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(pixelBuffer);
 unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

 // Pass bytesPerRow explicitly: rows may be padded beyond bufferWidth * 4.
 cv::Mat image((int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow);

 // Process your cv::Mat here

 CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

The code has been tested on my iPhone 5, iPhone 6, and iPhone 6 Plus with iOS 10.

