Cropping an image in iOS using OpenCV face detection

I used the code below to crop a face from an image in my face detection code, but I do not get the correct face image: a part of the image gets saved, but not the face. What is wrong with my code?

_faceCascade.detectMultiScale(mat, faces, 1.1, 2, kHaarOptions, cv::Size(40, 40)); 

And inside the displayFaces function, cropping is done with this code:

    CGRect cropRect = CGRectMake(faces[i].x, faces[i].y, faces[i].width, faces[i].width);
    CGImageRef cropped_img = CGImageCreateWithImageInRect(self.storeImage.CGImage, cropRect);
    UIImage *img = [UIImage imageWithCGImage:cropped_img];
    UIImageWriteToSavedPhotosAlbum(img, self, nil, nil);
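One thing worth checking, sketched below under the assumption that the detection Mat and self.storeImage can differ in size (sx, sy, croppedRef and face are placeholder names, not from the original code): the rect in faces[i] is in the pixel coordinates of the cv::Mat passed to detectMultiScale, while CGImageCreateWithImageInRect works in the CGImage's own pixel coordinates, so the rect may need scaling first.

    // scale the OpenCV rect into the CGImage's pixel space before cropping
    CGFloat sx = CGImageGetWidth(self.storeImage.CGImage) / (CGFloat)mat.cols;
    CGFloat sy = CGImageGetHeight(self.storeImage.CGImage) / (CGFloat)mat.rows;
    CGRect cropRect = CGRectMake(faces[i].x * sx, faces[i].y * sy,
                                 faces[i].width * sx, faces[i].height * sy);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(self.storeImage.CGImage, cropRect);
    UIImage *face = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);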

I get the correct coordinates in faces[i]; the problem is only with cropping and setting the ROI. Can someone help me solve it?

I tried the code below as well and got the same result (that is, I still do not get the actual face image):

    cv::Mat image_roi;
    cv::Rect roi(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    cv::Mat(testMat, roi).copyTo(image_roi);
    UIImage *img = [CaptureViewController imageWithCVMat:image_roi];
    UIImageWriteToSavedPhotosAlbum(img, self, nil, nil);
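Another pitfall worth ruling out (a sketch only, not necessarily the issue here): if the detected rect reaches past the edge of testMat, the ROI constructor asserts, so intersecting the rect with the Mat bounds first avoids that. bounded is a placeholder name:

    // clamp the face rect to the Mat bounds so the ROI stays inside the image
    cv::Rect bounded = faces[i] & cv::Rect(0, 0, testMat.cols, testMat.rows);
    cv::Mat image_roi = testMat(bounded).clone();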

Note: I detect a face in a live video stream using OpenCV face detection. I can draw a green rectangle around my face, but I cannot crop the face using the detected face parameters. I also tried setting a faceROI for eye detection, and that fails as well. To narrow things down: could the problem be in how I set the ROI for the image?

Updated 11/02/13:

I have now done the cropping as shown below, but the ROI is still not set correctly and the image is not cropped properly.

I found the cause of my problem above (thanks to @Jameo for pointing out that it is due to image rotation). I rotated the image as shown below:

    UIImage *rotateImage = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];
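A related sketch (an assumption, not part of the original code): instead of only tagging the image with an orientation, the rotation can be baked into the pixel data by redrawing, so that crop rects expressed in display coordinates line up with the underlying pixels. normalized is a placeholder name:

    // redraw so the UIImageOrientation is applied to the underlying pixels
    UIGraphicsBeginImageContextWithOptions(rotateImage.size, NO, rotateImage.scale);
    [rotateImage drawInRect:CGRectMake(0, 0, rotateImage.size.width, rotateImage.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();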

And cropped the image using the code below:

    // get sub image
    + (UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect {
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // translated rectangle for drawing sub image
        CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);

        // clip to the bounds of the image context
        // not strictly necessary as it will get clipped anyway?
        CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

        // draw image
        [img drawInRect:drawRect];

        // grab image
        UIImage *subImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return subImage;
    }
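For reference, a hypothetical call site for the method above (the class name CaptureViewController is assumed from the earlier snippet, and faceRect is the transformed rect shown further down):

    // crop the rotated image with the transformed face rect and save it
    UIImage *faceImage = [CaptureViewController getSubImageFrom:rotateImage WithRect:faceRect];
    UIImageWriteToSavedPhotosAlbum(faceImage, self, nil, nil);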

The image is cropped, but not at the correct coordinates.

My observations:

1) I crop the image using faceRect, which is obtained by applying the affine transform shown below. Could that be producing incorrect coordinates, or is it an error in my code?

2) I could not set the ROI on the image before applying the affine transform; could that be the reason? Is this the right procedure for determining the ROI?

 faceRect = CGRectApplyAffineTransform(faceRect, t); 
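The transform t itself is not shown; as a hedged guess at what it typically looks like in this kind of setup, it scales from the detection Mat's pixel space into the stored image's point space (mat and self.storeImage are reused from the snippets above, purely for illustration):

    // illustrative only: scale from the detection Mat's pixels to the image's points
    CGAffineTransform t = CGAffineTransformMakeScale(self.storeImage.size.width / (CGFloat)mat.cols,
                                                     self.storeImage.size.height / (CGFloat)mat.rows);
    CGRect faceRect = CGRectMake(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    faceRect = CGRectApplyAffineTransform(faceRect, t);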

But still, cropping is not performed correctly. The difference is shown below:

Full image:


Cropped image:

2 answers

How about this?

    - (UIImage *)getSubImageFrom:(UIImage *)imageToCrop WithRect:(CGRect)rect {
        CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
        UIImage *cropped = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        return cropped;
    }
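A hypothetical call site (originalImage and faceRect are placeholder names):

    UIImage *face = [self getSubImageFrom:originalImage WithRect:faceRect];

Note that CGImageCreateWithImageInRect crops in the CGImage's own pixel space, so the rect has to already account for the image's orientation and scale.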

Try this to get the face coordinates (the face feature's bounds are reported in a bottom-left-origin coordinate system, so the y value has to be flipped into UIKit's top-left-origin space):

    CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                                  _picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                                  faceFeature.bounds.size.width,
                                  faceFeature.bounds.size.height);

and then crop the image using:

    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, newBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    UIImageView *newView = [[UIImageView alloc] initWithImage:croppedImage];
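And, to match what the question is doing, the crop can then be saved the same way:

    // save the cropped face to the photo album, as in the question
    UIImageWriteToSavedPhotosAlbum(croppedImage, self, nil, nil);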
