It's complicated! There seems to be no documentation for this XMP data, so we'll have to guess how to interpret it. There are a number of choices to be made, and getting any of them wrong will produce incorrect results.
TL;DR: In theory your code looks correct, but in practice it gives the wrong result, and there's a fairly obvious adjustment we can try.
Orientation
Image files can contain metadata indicating whether (and how) the raw image data should be rotated and/or mirrored when displayed. UIImage expresses this with its imageOrientation property, and ALAssetRepresentation has a similar orientation property.
However, a CGImage is just a bitmap; it carries no orientation information. -[ALAssetRepresentation fullResolutionImage] gives you a CGImage in its original orientation, with no adjustment applied.
In your case, the orientation is 3, which means ALAssetOrientationRight or UIImageOrientationRight. Viewing software (such as UIImage) looks at this value, sees that the image data is oriented 90° to the right (clockwise), and rotates it 90° to the left (counterclockwise) before displaying it. Put another way, the CGImage is rotated 90° clockwise relative to the image you see on screen.
(To verify this, get the width and height of the CGImage with CGImageGetWidth() and CGImageGetHeight(). You should find that the CGImage is 2592 wide by 1936 tall. That is 90° rotated from the ALAssetRepresentation, whose dimensions should be 1936 wide by 2592 tall. You can also create a UIImage from the CGImage using the normal UIImageOrientationUp orientation, write that UIImage to a file, and see how it comes out.)
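For instance, a quick check along those lines might look like this (a sketch only; rep stands for your ALAssetRepresentation, and AssetsLibrary/UIKit are assumed to be imported):

```objc
// Sanity checks: compare the raw CGImage's pixel size with the asset's
// display size, and check the reported orientation.
CGImageRef fullImage = [rep fullResolutionImage];
NSLog(@"CGImage: %zu x %zu", CGImageGetWidth(fullImage), CGImageGetHeight(fullImage)); // expect 2592 x 1936
NSLog(@"Asset dimensions: %@", NSStringFromCGSize([rep dimensions]));                  // expect 1936 x 2592
NSLog(@"Asset orientation: %ld", (long)[rep orientation]);                              // expect 3 (ALAssetOrientationRight)

// Write the un-adjusted CGImage out with UIImageOrientationUp to see its
// true, unrotated appearance.
UIImage *upImage = [UIImage imageWithCGImage:fullImage scale:1.0 orientation:UIImageOrientationUp];
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"unrotated.jpg"];
[UIImageJPEGRepresentation(upImage, 0.9) writeToFile:path atomically:YES];
```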
The values in the XMP dictionary appear to be expressed relative to the CGImage's orientation: the crop rect is wider than it is tall, the X translation is larger than the Y translation, and so on.
Coordinate system
We also have to decide which coordinate system the XMP values are expressed in. It's most likely one of these two:
- Cartesian: the origin is at the lower-left corner of the image, X increases to the right, and Y increases upward. This is the system Core Graphics generally uses.
- "Flipped": the origin is in the upper left corner of the image, X increases to the right, and Y increases down. This is the system that UIKit usually uses. Surprisingly, unlike most CGs,
CGImageCreateWithImageInRect() interprets the rect argument in this way.
Let's assume "flipped" is correct, since it's generally more convenient. In any case, your code already assumes this.
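For reference, converting a rect from one system to the other only requires adjusting its Y origin; here's a minimal sketch (the function name and the imageHeight parameter are just for illustration):

```objc
// Convert a rect between Cartesian (origin at bottom-left) and "flipped"
// (origin at top-left) coordinates. Only the Y origin changes, and applying
// the conversion twice gives back the original rect.
static CGRect FlipRectVertically(CGRect rect, CGFloat imageHeight) {
    rect.origin.y = imageHeight - (rect.origin.y + rect.size.height);
    return rect;
}
```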
XMP Dictionary Interpretation
The dictionary contains an affine transform and a rectangle. Let's suppose they are meant to be interpreted in this order (a literal sketch of these steps follows the list):
- Apply the transform
- Draw the image in its natural rect, (0, 0, w, h)
- Un-apply the transform (pop the transform stack)
- Crop to the crop rect
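Taken literally, those steps might look something like this in Core Graphics (illustration only, not how we'll actually do it; rep, xmpTransform, and xmpCropRect stand in for your asset representation and the values parsed out of the XMP dictionary):

```objc
// A literal rendition of the four steps, for illustration only.
CGImageRef fullImage = [rep fullResolutionImage];
size_t w = CGImageGetWidth(fullImage), h = CGImageGetHeight(fullImage);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

CGContextConcatCTM(ctx, xmpTransform);                                 // 1. apply the transform
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), fullImage);            // 2. draw in the natural rect
CGImageRef drawn = CGBitmapContextCreateImage(ctx);                    // 3. the transform affected only the drawing
CGImageRef cropped = CGImageCreateWithImageInRect(drawn, xmpCropRect); // 4. crop
CGImageRelease(drawn);
CGContextRelease(ctx);
```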
If we try this by hand, the numbers seem to work out. Here's a rough diagram, with the cropped region shown in translucent purple:

Now for some code
In actual code we don't have to follow those exact steps with CG calls; we just have to end up with the same result as if we had.
We just want to call CGImageCreateWithImageInRect, and it's fairly obvious how to compute the equivalent crop rect, (331, 161, 1938, 1420). Your code looks correct.
If we crop the image to that rect and then create a UIImage from it (passing the correct orientation, UIImageOrientationRight), we should get the right result.
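That is, something along these lines (a sketch; rep is your ALAssetRepresentation and the rect is the one computed from the XMP values):

```objc
CGImageRef fullImage = [rep fullResolutionImage];
CGRect cropRect = CGRectMake(331, 161, 1938, 1420); // computed from the XMP values
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(fullImage, cropRect);
// Tag the cropped bitmap with the orientation the asset reports, so UIKit
// rotates it correctly for display.
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:1.0
                                      orientation:UIImageOrientationRight];
CGImageRelease(croppedCGImage);
```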
But the result is wrong! What you get looks as if the operation had been done in the Cartesian coordinate system:

Or, equivalently, as if the image were oriented the other way, UIImageOrientationLeft, while keeping the same crop rect:

The fix
This is all very strange, and I don't understand what goes wrong here, though I'd love to.
But the fix seems pretty simple: just flip the crop rect vertically. After computing it as above:
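A sketch of that flip, assuming fullImage is the CGImage being cropped and cropRect is the rect computed above:

```objc
// Flip the crop rect vertically: recompute its Y origin against the CGImage's
// height. (The same conversion works in both directions.)
cropRect.origin.y = CGImageGetHeight(fullImage) - (cropRect.origin.y + cropRect.size.height);
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(fullImage, cropRect);
```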
Does that work? (Both for this case and for images with other orientations?)