I am trying to write a method that takes a UIImage and returns a new UIImage that contains only the face. It seems like it should be very simple, but my brain is having trouble moving between the CoreImage and UIImage coordinate spaces.
Here is the basic code:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}

- (UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                   forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];

    // For simplicity, I just grab the first detected face.
    CIFaceFeature *faceFeature = [features objectAtIndex:0];
    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
The image that comes back is cropped from a vertically flipped version of the picture. I have tried flipping faceFeature.bounds using something like this:
CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);
... but that gives me a rect that lies outside the image (the Y values come out negative).
I am sure there is something simple to fix this, but short of measuring down from the bottom of the image and creating a new rect using that as the Y, is there a "right" way to do this?
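In case that is unclear, this is a sketch of the manual conversion I mean (it assumes the image height can be taken from the same CGImage the CIImage was built from, and that it runs inside getFaceImage: where picture and faceFeature are in scope):

CGRect faceBounds = faceFeature.bounds;
CGFloat imageHeight = CGImageGetHeight([picture CGImage]);

// CoreImage measures Y from the bottom-left corner; CGImage cropping
// measures from the top-left, so flip the Y origin by hand.
CGRect uiKitRect = CGRectMake(faceBounds.origin.x,
                              imageHeight - faceBounds.origin.y - faceBounds.size.height,
                              faceBounds.size.width,
                              faceBounds.size.height);

UIImage *faceImage = [self imageFromImage:picture inRect:uiKitRect];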
Thanks!
ios xcode core-graphics face-detection
Tim sullivan