UIImage Face Detection

I am trying to write a method that takes a UIImage and returns a new UIImage containing just the face. It seems like it should be very simple, but my brain is having trouble moving between the Core Image and UIImage coordinate spaces.

Here is the basic information:

    - (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
        CGImageRef sourceImageRef = [image CGImage];
        CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
        UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
        CGImageRelease(newImageRef);
        return newImage;
    }

    - (UIImage *)getFaceImage:(UIImage *)picture {
        CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                   context:nil
                                                   options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                       forKey:CIDetectorAccuracy]];
        CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
        NSArray *features = [detector featuresInImage:ciImage];

        // For simplicity, I'm grabbing the first one in this code sample,
        // and we can all pretend that the photo has one face for sure. :-)
        CIFaceFeature *faceFeature = [features objectAtIndex:0];

        return [self imageFromImage:picture inRect:faceFeature.bounds];
    }

The image that gets returned is from the flipped image. I have tried flipping faceFeature.bounds using something like this:

    CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
    CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);

... but it gives me results outside the image.

I am sure there is something simple I can do to fix this. Short of computing the flipped Y myself and building a new rect from it, is there a "right" way to do this?

Thanks!

ios xcode core-graphics face-detection
3 answers

Since there was no easy way to do this, I just wrote the code for it:

    CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                                  _picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                                  faceFeature.bounds.size.width,
                                  faceFeature.bounds.size.height);

It worked.
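For the record, the flip is needed because Core Image measures bounds from the bottom-left corner of the image while UIKit measures from the top-left, so the Y origin has to be mirrored against the image height. Here is the same arithmetic as a standalone helper, in case you need it in more than one place (the function name is mine, purely illustrative):

    // Convert a rect from Core Image's bottom-left-origin coordinate space
    // to UIKit's top-left-origin space, for an image of the given size.
    static CGRect CIRectToUIKitRect(CGRect ciRect, CGSize imageSize) {
        CGRect rect = ciRect;
        rect.origin.y = imageSize.height - ciRect.origin.y - ciRect.size.height;
        return rect;
    }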


It is much easier and less hacky to use a CIContext to crop the face out of the image. Something like this:

    CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage]
                                           fromRect:faceFeature.bounds];
    UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage:fromRect: returns a +1 reference

Here inputImage is your UIImage object and faceFeature is a CIFaceFeature, which you get from the [CIDetector featuresInImage:] method. Note that createCGImage:fromRect: takes its rect in Core Image coordinates, the same space faceFeature.bounds is expressed in, which is why no flipping is needed.
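Putting the pieces together, here is a minimal sketch of detection plus cropping inside one method, assuming ARC and a single face (all names other than inputImage are illustrative):

    // Detect the first face in inputImage and crop it out with a CIContext.
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIImage *ciImage = [CIImage imageWithCGImage:inputImage.CGImage];
    NSArray *features = [detector featuresInImage:ciImage];

    UIImage *croppedFace = nil;
    if (features.count > 0) {
        CIFaceFeature *faceFeature = (CIFaceFeature *)features[0];
        // The rect is in Core Image coordinates, matching faceFeature.bounds.
        CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:faceFeature.bounds];
        croppedFace = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    }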


There is no easy way to achieve this. The problem is that images from the iPhone camera are always in portrait mode, and metadata settings are used to display them correctly. You will also get better accuracy from the face-detection call if you tell it the rotation of the image up front. To complicate matters, you have to give it the image orientation in EXIF format.

Fortunately, there is an Apple sample project that covers all of this, SquareCam; I suggest you check it out for the details.
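To make "EXIF format" concrete: [CIDetector featuresInImage:options:] accepts a CIDetectorImageOrientation option whose value is the TIFF/EXIF orientation number (1 through 8). A minimal sketch, assuming the conventional UIImageOrientation-to-EXIF mapping (the helper function name is mine, and you should verify the mapping against your own test images):

    // Map UIImageOrientation to the TIFF/EXIF orientation value (1-8)
    // that CIDetectorImageOrientation expects.
    static int exifOrientationFromUIImageOrientation(UIImageOrientation orientation) {
        switch (orientation) {
            case UIImageOrientationUp:            return 1;
            case UIImageOrientationDown:          return 3;
            case UIImageOrientationLeft:          return 8;
            case UIImageOrientationRight:         return 6;
            case UIImageOrientationUpMirrored:    return 2;
            case UIImageOrientationDownMirrored:  return 4;
            case UIImageOrientationLeftMirrored:  return 5;
            case UIImageOrientationRightMirrored: return 7;
        }
        return 1;
    }

    // Tell the detector which way is up before asking for features:
    NSDictionary *opts = @{CIDetectorImageOrientation :
                               @(exifOrientationFromUIImageOrientation(picture.imageOrientation))};
    NSArray *features = [detector featuresInImage:ciImage options:opts];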

