Coordinates from CIDetector (Core Image) are inverted relative to UIKit coordinates. There are plenty of iOS face detection tutorials, but most of them are either incomplete or get the coordinate conversion wrong. Here is one that gets it right: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
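For reference, a minimal sketch of that vertical flip in Swift (the blog post covers the same idea; the function name and parameters here are just illustrative):

```swift
import UIKit

// Core Image rects have their origin at the bottom-left of the image;
// UIKit rects have it at the top-left. Flipping the y origin around the
// image height converts between the two.
func uikitRect(fromCoreImageRect ciRect: CGRect, imageHeight: CGFloat) -> CGRect {
    var rect = ciRect
    rect.origin.y = imageHeight - ciRect.origin.y - ciRect.size.height
    return rect
}
```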
One note: the tutorial uses a small image, so the detected coordinates do not need to be scaled to match the image as displayed on screen (in a UIImageView). If you are working with a photo taken by the iPad camera, you will have to scale the coordinates by the factor the original image is scaled down for display (unless you shrink the image before running face detection, which may be a good idea anyway). You may also need to rotate the image into the correct orientation.
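A hedged sketch of running the detector with the photo's orientation passed in; the UIImage-to-EXIF mapping below is the common convention, so verify it against your own images:

```swift
import UIKit
import CoreImage

func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let cgImage = image.cgImage else { return [] }

    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    // Translate the UIImage orientation into the EXIF value CIDetector expects.
    let exifOrientation: Int
    switch image.imageOrientation {
    case .up:            exifOrientation = 1
    case .down:          exifOrientation = 3
    case .left:          exifOrientation = 8
    case .right:         exifOrientation = 6
    case .upMirrored:    exifOrientation = 2
    case .downMirrored:  exifOrientation = 4
    case .leftMirrored:  exifOrientation = 5
    case .rightMirrored: exifOrientation = 7
    @unknown default:    exifOrientation = 1
    }

    let features = detector?.features(in: CIImage(cgImage: cgImage),
                                      options: [CIDetectorImageOrientation: exifOrientation]) ?? []
    return features.compactMap { $0 as? CIFaceFeature }
}
```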
One of the answers to this question has a routine for rotation and scaling: Previewing the UIImagePickerController camera - portrait in a landscape application
And this answer has a good routine for finding the image scale when a UIImageView displays its image with "aspect fit": How to get the size of a scalable UIImage in a UIImageView?
You will need to use the scale to map the coordinates of the CIDetector from the full-size image to the thumbnail shown in the UIImageView.
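As a rough sketch of that mapping (assuming the face rect has already been converted to UIKit coordinates in image pixels and the view uses "aspect fit"; the helper name is made up):

```swift
import UIKit

func viewRect(forImageRect imageRect: CGRect,
              imageSize: CGSize,
              in imageView: UIImageView) -> CGRect {
    // Aspect-fit scale: the smaller of the width and height ratios.
    let scale = min(imageView.bounds.width / imageSize.width,
                    imageView.bounds.height / imageSize.height)

    // The fitted image is centered in the view, so one axis gets an offset.
    let offsetX = (imageView.bounds.width - imageSize.width * scale) / 2
    let offsetY = (imageView.bounds.height - imageSize.height * scale) / 2

    return CGRect(x: imageRect.origin.x * scale + offsetX,
                  y: imageRect.origin.y * scale + offsetY,
                  width: imageRect.width * scale,
                  height: imageRect.height * scale)
}
```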