I am trying to detect the edges of a business card (and draw them) with the iPhone camera, using OpenCV. I am new to this framework, as well as to computer vision and C++.
I am trying to use the solution described in the OpenCVSquares github project: https://github.com/foundry/OpenCVSquares
It works with a predefined image, but I'm trying to get it to work with the camera.
To do this, I use the CvVideoCameraDelegate protocol, implementing it in CVViewController.mm as explained in http://docs.opencv.org/doc/tutorials/ios/video_processing/video_processing.html, like this:
    #ifdef __cplusplus
    - (void)processImage:(cv::Mat &)matImage
    {
        // frame processing goes here
    }
    #endif
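For reference, the rest of my camera setup follows the tutorial, roughly like this (property names taken from the tutorial; attaching the camera to my imageView is my own choice):

    // videoCamera is a CvVideoCamera * property, as in the tutorial
    - (void)viewDidLoad
    {
        [super viewDidLoad];
        self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
        self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
        self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
        self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
        self.videoCamera.defaultFPS = 30;
        self.videoCamera.delegate = self; // processImage: is called for each frame
        [self.videoCamera start];
    }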
EDIT:
If I run the detection directly on matImage inside that method, I get EXC_BAD_ACCESS...
If I clone matImage before processing it and log the result, the frame gets processed and the rectangles are even found, but the drawn rectangle never shows up in the image displayed on screen.
    cv::Mat temp = matImage.clone();
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [[UIImage alloc] initWithCVMat:CVSquares::detectedSquaresInImage(temp, self.tolerance, self.threshold, self.levels, [self accuracy])
                                            orientation:UIImageOrientationDown];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    });
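Is the intended pattern instead to modify matImage in place and let CvVideoCamera render the result? Something like this sketch (it assumes CVSquares::detectedSquaresInImage returns the annotated cv::Mat, which is how I read the OpenCVSquares code):

    - (void)processImage:(cv::Mat &)matImage
    {
        // copy the annotated result back into the frame reference;
        // CvVideoCamera displays matImage after this method returns
        matImage = CVSquares::detectedSquaresInImage(matImage, self.tolerance, self.threshold, self.levels, [self accuracy]);
    }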
I am sure I am missing something, perhaps because I am passing the object incorrectly (by value rather than by reference or pointer), so the object I need to modify is not the one actually being modified.
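As I understand it, the difference is something like this (a hypothetical helper just to illustrate why I added the clone()):

    #include <opencv2/opencv.hpp>

    static void copySemantics(cv::Mat &frame)
    {
        cv::Mat shallow = frame;       // header copy: both Mats share one pixel buffer
        cv::Mat deep = frame.clone();  // deep copy: owns its own pixel data
        shallow.setTo(cv::Scalar(0));  // blanks 'frame' too; 'deep' is unaffected
    }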
In any case, if this is the wrong approach, I would really appreciate a tutorial or an example that does something similar, using either OpenCV or GPUImage (I'm not familiar with the latter)...