iOS: detecting rectangles from the camera using OpenCV

I am trying to detect the edges of a business card (and draw them) with the iPhone camera using OpenCV. I am new to this framework, as well as to computer vision and C++.

I am trying to use the solution described in the OpenCVSquares GitHub project: https://github.com/foundry/OpenCVSquares

It works with a predefined image, but I'm trying to get it to work with the camera.

To do this, I use the CvVideoCameraDelegate protocol, implementing it in CVViewController.mm as explained in http://docs.opencv.org/doc/tutorials/ios/video_processing/video_processing.html , like this:

    #ifdef __cplusplus
    - (void)processImage:(cv::Mat &)matImage
    {
        //NSLog(@"Processing Image");
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            matImage = CVSquares::detectedSquaresInImage(matImage, self.tolerance, self.threshold, self.levels, [self accuracy]);
            UIImage *image = [[UIImage alloc] initWithCVMat:matImage orientation:UIImageOrientationDown];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.imageView.image = image;
            });
        });
    }
    #endif
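
For context, the camera setup that pairs with this callback, following the linked tutorial, looks roughly like the sketch below. The property declarations and capture settings are assumptions based on that tutorial, not code taken from the project:

    #import <UIKit/UIKit.h>
    #import <opencv2/highgui/cap_ios.h>

    @interface CVViewController () <CvVideoCameraDelegate>
    @property (nonatomic, strong) CvVideoCamera *videoCamera;
    @property (nonatomic, weak) IBOutlet UIImageView *imageView;
    @end

    @implementation CVViewController

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        // The camera renders its frames into the view passed as parent.
        self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
        self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
        self.videoCamera.defaultFPS = 30;
        self.videoCamera.delegate = self; // processImage: is called once per frame
        [self.videoCamera start];
    }

    // processImage: (shown above) goes in this implementation as well.

    @end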

EDIT:

If I do this, it gives me EXC_BAD_ACCESS...

If I clone matImage before processing it and work on the clone, it still processes the image and even finds the rectangles, but the detected rectangles never show up in the image displayed in the imageView.

    cv::Mat temp = matImage.clone();
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [[UIImage alloc] initWithCVMat:CVSquares::detectedSquaresInImage(temp, self.tolerance, self.threshold, self.levels, [self accuracy])
                                            orientation:UIImageOrientationDown];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    });

I am sure I am missing something, probably because I am passing an object incorrectly (a copy instead of a pointer or reference to it), so the object I actually need to change is not being changed.
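
That intuition is consistent with how cv::Mat works: plain assignment shares the underlying pixel buffer, while clone() produces an independent deep copy, so changes to a clone never reach the original frame. A minimal standalone illustration (plain C++, not code from the project):

    #include <opencv2/core/core.hpp>

    static void matCopySemanticsDemo()
    {
        cv::Mat a(100, 100, CV_8UC1, cv::Scalar(0));
        cv::Mat b = a;          // shallow copy: b shares a's pixel buffer
        cv::Mat c = a.clone();  // deep copy: c owns its own pixels
        b.setTo(255);           // a now reads 255 everywhere
        c.setTo(128);           // a is unaffected
        // Hence rectangles drawn on a clone never reach the frame
        // that the camera pipeline is displaying.
    }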

In any case, if this is the wrong approach, I would really appreciate a tutorial or an example that does something like this, either with OpenCV or GPUImage (which I am not familiar with)...

ios iphone opencv computer-vision
1 answer

So the solution was pretty simple ...

Instead of using matImage to set imageView.image, you just have to modify matImage itself, since the CvVideoCamera was already initialized with (and attached to) the image view:

self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];

Finally, the function looked like this:

    #ifdef __cplusplus
    - (void)processImage:(cv::Mat &)matImage
    {
        matImage = CVSquares::detectedSquaresInImage(matImage, self.angleTolerance, self.threshold, self.levels, self.accuracy);
    }
    #endif
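
Because processImage: receives the frame by reference, anything written into matImage ends up in the view the camera renders to. The same in-place pattern works for any per-frame drawing; as a sketch, here is a quad outlined directly on the frame (the corner points are placeholder values, since the thread does not show what CVSquares computes):

    #import <opencv2/opencv.hpp>

    - (void)processImage:(cv::Mat &)matImage
    {
        // Placeholder corners standing in for a detected rectangle.
        std::vector<cv::Point> quad;
        quad.push_back(cv::Point(50, 50));
        quad.push_back(cv::Point(250, 50));
        quad.push_back(cv::Point(250, 150));
        quad.push_back(cv::Point(50, 150));

        const cv::Point *pts = &quad[0];
        int npts = (int)quad.size();
        // polylines writes into matImage's own buffer, so the outline
        // appears in the camera's parent view with no extra plumbing.
        cv::polylines(matImage, &pts, &npts, 1, true, cv::Scalar(0, 255, 0), 4);
    }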