Convert CGPoint results returned from CIFaceFeature

I am trying to figure out how to convert the CGPoint results returned from CIFaceFeature so I can draw them in a CALayer. Previously I normalized my images to an orientation of Up (0) to simplify the task, but that creates problems for images taken with the device in landscape mode.

I have been working on this for some time without success, and I'm not sure whether my understanding of the problem is wrong, my implementation is wrong, or both. Here is what I think is right:

[Image: original image from the camera]

According to the documentation for the CIDetector featuresInImage:options: method, the options parameter is described as:

 A dictionary that specifies the orientation of the image. The detection is adjusted to account for the image orientation but the coordinates in the returned feature objects are based on those of the image. 

[Image: the image as displayed in the UIImageView]

In the code below, I am trying to rotate the CGPoint so that I can draw it on a CAShapeLayer that overlays a UIImageView.

What I'm doing (or what I think I'm doing) is translating the left-eye CGPoint to the center of the view, rotating it 90 degrees, and then translating the point back to where it was. The result is wrong, but I don't know where I'm mistaken. Is my approach wrong, or the way I'm implementing it?

 #define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI) 

- leftEyePosition is a CGPoint

    // Rotate the point 90 degrees about the view's center: translate the
    // center to the origin, rotate, then translate back.
    CGAffineTransform transRot = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(90));
    float x = self.center.x;
    float y = self.center.y;
    CGAffineTransform tCenter = CGAffineTransformMakeTranslation(-x, -y);
    CGAffineTransform tOffset = CGAffineTransformMakeTranslation(x, y);

    leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tCenter);
    leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, transRot);
    leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tOffset);
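
Equivalently, the same translate-rotate-translate sequence can be composed into a single transform. A minimal sketch, using the same leftEyePosition and view center as above (note that each CGAffineTransformRotate/Translate call prepends its operation, so the last call listed acts on the point first):

    CGFloat x = self.center.x;
    CGFloat y = self.center.y;
    // Build T(x, y) * R(90) * T(-x, -y); operations apply to the point
    // from the last call back to the first.
    CGAffineTransform t = CGAffineTransformMakeTranslation(x, y);   // applied last: move back
    t = CGAffineTransformRotate(t, DEGREES_TO_RADIANS(90));         // applied second: rotate
    t = CGAffineTransformTranslate(t, -x, -y);                      // applied first: center to origin
    leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, t);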

From this Stack Overflow post:

Orientation

    Apple / UIImage.imageOrientation    Device orientation    JPEG/EXIF kCGImagePropertyOrientation
    UIImageOrientationUp    = 0         Landscape left        1
    UIImageOrientationDown  = 1         Landscape right       3
    UIImageOrientationLeft  = 2         Portrait down         8
    UIImageOrientationRight = 3         Portrait up           6


2 answers

I needed to solve the same problem. Apple's "SquareCam" sample works directly on the video output, but I needed the results from a UIImage, so I extended the CIFaceFeature class with some transformation methods that return the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The full implementation is available here: https://gist.github.com/laoyang/5747004 . You can use it directly.

Here is the most basic conversion for a point from CIFaceFeature; the returned CGPoint is converted based on the orientation of the image:

    // Converts a point from Core Image's bottom-left-origin coordinate space
    // into UIKit's top-left-origin space, accounting for imageOrientation.
    - (CGPoint)pointForImage:(UIImage *)image fromPoint:(CGPoint)originalPoint
    {
        CGFloat imageWidth = image.size.width;
        CGFloat imageHeight = image.size.height;

        CGPoint convertedPoint;

        switch (image.imageOrientation) {
            case UIImageOrientationUp:
                convertedPoint.x = originalPoint.x;
                convertedPoint.y = imageHeight - originalPoint.y;
                break;
            case UIImageOrientationDown:
                convertedPoint.x = imageWidth - originalPoint.x;
                convertedPoint.y = originalPoint.y;
                break;
            case UIImageOrientationLeft:
                convertedPoint.x = imageWidth - originalPoint.y;
                convertedPoint.y = imageHeight - originalPoint.x;
                break;
            case UIImageOrientationRight:
                convertedPoint.x = originalPoint.y;
                convertedPoint.y = originalPoint.x;
                break;
            case UIImageOrientationUpMirrored:
                convertedPoint.x = imageWidth - originalPoint.x;
                convertedPoint.y = imageHeight - originalPoint.y;
                break;
            case UIImageOrientationDownMirrored:
                convertedPoint.x = originalPoint.x;
                convertedPoint.y = originalPoint.y;
                break;
            case UIImageOrientationLeftMirrored:
                convertedPoint.x = imageWidth - originalPoint.y;
                convertedPoint.y = originalPoint.x;
                break;
            case UIImageOrientationRightMirrored:
                convertedPoint.x = originalPoint.y;
                convertedPoint.y = imageHeight - originalPoint.x;
                break;
            default:
                break;
        }

        return convertedPoint;
    }
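
For context, a hypothetical call site; it assumes pointForImage:fromPoint: is exposed on the CIFaceFeature category, and self.image and features are assumptions, not part of the gist:

    CIFaceFeature *face = [features firstObject];
    if (face.hasLeftEyePosition) {
        // Left-eye position converted into the image's top-left-origin
        // (UIKit) coordinate space.
        CGPoint eye = [face pointForImage:self.image fromPoint:face.leftEyePosition];
        NSLog(@"Left eye in UIKit coordinates: %@", NSStringFromCGPoint(eye));
    }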

And here are the category methods based on the above conversion:

    // Get converted features with respect to the imageOrientation property
    - (CGPoint) leftEyePositionForImage:(UIImage *)image;
    - (CGPoint) rightEyePositionForImage:(UIImage *)image;
    - (CGPoint) mouthPositionForImage:(UIImage *)image;
    - (CGRect) boundsForImage:(UIImage *)image;

    // Get normalized features (0-1) with respect to the imageOrientation property
    - (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
    - (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
    - (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
    - (CGRect) normalizedBoundsForImage:(UIImage *)image;

    // Get feature location inside of a given UIView size with respect to the imageOrientation property
    - (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
    - (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;
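
The ...ForImage:inView: variants presumably end by scaling the converted point into the target size. A minimal sketch of that last step as a hypothetical helper, assuming the image is displayed at exactly viewSize (i.e., no aspect-fit letterboxing):

    // Scale a converted point from image coordinates into a view's
    // coordinate space (assumes the image fills the view exactly).
    static CGPoint scaledPointForView(CGPoint imagePoint, CGSize imageSize, CGSize viewSize) {
        return CGPointMake(imagePoint.x * viewSize.width  / imageSize.width,
                           imagePoint.y * viewSize.height / imageSize.height);
    }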

(Another thing to note: you need to pass the correct EXIF orientation, derived from the UIImage's orientation, when extracting face features. Pretty confusing... here's what I did:

    // Map UIImageOrientation to the EXIF orientation values expected by
    // CIDetectorImageOrientation (kCGImagePropertyOrientation).
    int exifOrientation = 1; // default: treat as "Up"
    switch (self.image.imageOrientation) {
        case UIImageOrientationUp:            exifOrientation = 1; break;
        case UIImageOrientationDown:          exifOrientation = 3; break;
        case UIImageOrientationLeft:          exifOrientation = 8; break;
        case UIImageOrientationRight:         exifOrientation = 6; break;
        case UIImageOrientationUpMirrored:    exifOrientation = 2; break;
        case UIImageOrientationDownMirrored:  exifOrientation = 4; break;
        case UIImageOrientationLeftMirrored:  exifOrientation = 5; break;
        case UIImageOrientationRightMirrored: exifOrientation = 7; break;
        default: break;
    }

    NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:detectorOptions];
    NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                              options:@{ CIDetectorImageOrientation : [NSNumber numberWithInt:exifOrientation] }];

)
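
To connect this back to the question (drawing the point on a CAShapeLayer over a UIImageView), a hedged sketch; self.imageView and self.overlayLayer are assumptions, not part of the gist:

    CIFaceFeature *face = [features firstObject];
    if (face.hasLeftEyePosition) {
        CGPoint p = [face leftEyePositionForImage:self.image
                                           inView:self.imageView.bounds.size];
        // Mark the eye with a small dot (the radius is arbitrary).
        UIBezierPath *dot = [UIBezierPath bezierPathWithArcCenter:p
                                                           radius:4.0
                                                       startAngle:0
                                                         endAngle:2 * M_PI
                                                        clockwise:YES];
        self.overlayLayer.path = dot.CGPath;
    }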


I think you need to flip the found face coordinates around the horizontal central axis of the image.

Can you try this transform:

    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, 0.0f, image.size.height);
    transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
    [path applyTransform:transform];

This transform only works if the image's imageOrientation is UIImageOrientationUp (0) before you run face detection.
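
If your image is not already Up-oriented, a common workaround (a sketch, not part of this answer's code) is to redraw the UIImage so its imageOrientation becomes UIImageOrientationUp before running detection:

    // Redraw the image so its pixel data matches the display orientation;
    // the resulting image's imageOrientation is UIImageOrientationUp.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();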

