Correctly crop the image obtained from the photo library

I have been working on this all day and have looked at a lot of questions here on SO and on Google, but so far I can't get it completely right.

I took a photo on an iPad running iOS 5.1.1 and cropped it using the Photos app. When I later fetch it from the assets library, I get the full-resolution image without the crop applied.

I found that crop information is contained in the AdjustmentXMP metadata key on my ALAssetRepresentation object.
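
For reference, here is roughly how I read the packet out (the value's exact type doesn't seem to be documented, so treat this as a sketch that handles both NSString and NSData):

 // Pull the XMP packet out of the asset representation's metadata.
 // Assumption: the value may come back as NSString or NSData, so handle both.
 id xmpValue = [[rep metadata] objectForKey:@"AdjustmentXMP"];
 NSData *xmpData = [xmpValue isKindOfClass:[NSString class]]
     ? [(NSString *)xmpValue dataUsingEncoding:NSUTF8StringEncoding]
     : (NSData *)xmpValue;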

So, I crop the photo using XMP information, and this is what I get:

Original photo (1,936 x 2,592):
Original photo

Correctly cropped photo, as shown in the Photos app (1,420 x 1,938):
Properly Cropped Photo

Photo cropped with the code below
(also 1,420 x 1,938, but cropped about 200 pixels too far to the right):
Incorrectly cropped photo

This is the XMP data from the photo:

 <x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 4.4.0">
   <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
     <rdf:Description rdf:about=""
         xmlns:aas="http://ns.apple.com/adjustment-settings/1.0/">
       <aas:AffineA>1</aas:AffineA>
       <aas:AffineB>0</aas:AffineB>
       <aas:AffineC>0</aas:AffineC>
       <aas:AffineD>1</aas:AffineD>
       <aas:AffineX>-331</aas:AffineX>
       <aas:AffineY>-161</aas:AffineY>
       <aas:CropX>0</aas:CropX>
       <aas:CropY>0</aas:CropY>
       <aas:CropW>1938</aas:CropW>
       <aas:CropH>1420</aas:CropH>
     </rdf:Description>
   </rdf:RDF>
 </x:xmpmeta>

Here is the code I use to crop the photo:

 ALAssetRepresentation *rep = // Get asset representation
 CGImageRef defaultImage = [rep fullResolutionImage];

 // Values obtained from the XMP data above:
 CGRect cropBox = CGRectMake(0, 0, 1938, 1420);
 CGAffineTransform transform = CGAffineTransformMake(1, 0, 0, 1, 331, 161);

 // Apply the affine transform to the crop box:
 CGRect transformedCropBox = CGRectApplyAffineTransform(cropBox, transform);

 // Create a new cropped image:
 CGImageRef croppedImage = CGImageCreateWithImageInRect(defaultImage, transformedCropBox);

 // Create the UIImage:
 UIImage *image = [UIImage imageWithCGImage:croppedImage scale:[rep scale] orientation:[rep orientation]];
 CGImageRelease(croppedImage);

I have reproduced the problem with multiple images. If I just use fullScreenImage, it displays fine, but I need the full-sized image.

ios objective-c alassetslibrary cgimage alasset
1 answer

It's complicated! There seems to be no documentation for this XMP data, so we have to guess at how to interpret it. There are a number of choices to be made, and making the wrong ones leads to incorrect results.

TL;DR: In theory your code looks correct, but in practice it gives the wrong result, and there is a fairly obvious adjustment we can try.

Orientation

Image files may contain extra metadata indicating whether (and how) the raw image data should be rotated and/or flipped when displayed. UIImage exposes this through its imageOrientation property, and ALAssetRepresentation does the same with its orientation property.

A CGImage, however, is just a bitmap; it carries no orientation information. -[ALAssetRepresentation fullResolutionImage] gives you the CGImage in its raw orientation, without any adjustment applied.

In your case, the orientation is 3, which means ALAssetOrientationRight, or equivalently UIImageOrientationRight. Software that displays images (and UIImage) looks at this value, sees that the image is "oriented right" (rotated 90° clockwise), and rotates it 90° counterclockwise before displaying it. To put it another way: the CGImage is rotated 90° clockwise relative to the image you see on screen.

(To verify this, get the width and height of the CGImage using CGImageGetWidth() and CGImageGetHeight(). You should find that the CGImage is 2592 wide and 1936 tall, i.e. rotated 90° from the ALAssetRepresentation, whose dimensions should be 1936 wide by 2592 tall. You can also create a UIImage from the CGImage using the normal UIImageOrientationUp orientation, write that UIImage to a file, and see how it looks.)
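
A minimal sketch of that check, using names of my own (it just logs the two sizes and writes the un-rotated bitmap to a temporary file so you can look at it):

 // Sanity check: compare the raw bitmap's size with the asset representation's,
 // then dump the un-rotated bitmap to disk so it can be inspected by eye.
 CGImageRef raw = [rep fullResolutionImage];
 NSLog(@"CGImage: %zu x %zu", CGImageGetWidth(raw), CGImageGetHeight(raw)); // expect 2592 x 1936
 NSLog(@"Asset representation: %@", NSStringFromCGSize([rep dimensions]));  // expect 1936 x 2592
 UIImage *unrotated = [UIImage imageWithCGImage:raw
                                          scale:1.0
                                    orientation:UIImageOrientationUp];
 NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"unrotated.jpg"];
 [UIImageJPEGRepresentation(unrotated, 0.9) writeToFile:path atomically:YES];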

The values in the XMP dictionary appear to be relative to the orientation of the CGImage: the crop rectangle is wider than it is tall, the X translation is larger than the Y translation, and so on, which makes sense.

Coordinate system

We also have to decide which coordinate system the XMP values are expressed in. It is most likely one of these two:

  • Cartesian: the origin is at the lower-left corner of the image, X increases to the right, and Y increases upward. This is the system Core Graphics typically uses.
  • "Flipped": the origin is at the upper-left corner of the image, X increases to the right, and Y increases downward. This is the system UIKit typically uses. Surprisingly, unlike most of Core Graphics, CGImageCreateWithImageInRect() interprets its rect argument this way.

Let's assume the "flipped" interpretation is correct, since it is generally more convenient. In any case, your code is already treating it that way.
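
As a tiny illustration of that last point, here is a hypothetical 100 x 100 crop (not part of your code):

 // CGImageCreateWithImageInRect() measures its rect from the bitmap's top-left
 // corner, so this extracts the top-left 100 x 100 patch of the bitmap, not the
 // bottom-left one that Core Graphics conventions might lead you to expect.
 CGImageRef topLeftPatch =
     CGImageCreateWithImageInRect(defaultImage, CGRectMake(0, 0, 100, 100));
 // ... inspect or save the patch ...
 CGImageRelease(topLeftPatch);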

XMP Dictionary Interpretation

The dictionary contains an affine transform and a rectangle. Let's suppose they are meant to be interpreted in the following order:

  • Apply the transform
  • Draw the image in its natural rect (0, 0, w, h)
  • Undo the transform (pop the transform stack)
  • Crop to the crop rectangle

If we try this manually, the numbers seem to work out. Here's a rough diagram, with the crop region shown in transparent purple:

diagram for flipped case

Now for some code

We don't actually have to perform these exact steps as CG calls, but we do have to act as if we had.

We just want to call CGImageCreateWithImageInRect(), and it's fairly obvious how to calculate the corresponding crop rect: (331, 161, 1938, 1420). Your code looks correct.

If we crop the image to that rect and then create a UIImage from it (specifying the correct orientation, UIImageOrientationRight), we should get the correct result.

But the result is wrong! What you get is as if we had done the operations in a Cartesian coordinate system:

diagram for cartesian case

Or, alternatively, as if the image were oriented the other way, UIImageOrientationLeft, but we kept the same crop rect:

diagram for oriented-left case

Correction

This is all very strange, and I don't understand what is going wrong here, although I would love to.

But the fix seems pretty simple: just flip the crop rect vertically within the image. After calculating it as above:

 // Flip the transformedCropBox within the image:
 transformedCropBox.origin.y = CGImageGetHeight(defaultImage) - CGRectGetMaxY(transformedCropBox);
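
Putting it together, here is a rough sketch of the whole cropping path with that flip folded in (the XMP parsing is elided and the crop values are hard-coded from your packet above, so treat it as an illustration rather than a drop-in):

 ALAssetRepresentation *rep = // Get asset representation
 CGImageRef defaultImage = [rep fullResolutionImage];

 // Values from the AdjustmentXMP packet (hard-coded here for illustration):
 CGRect cropBox = CGRectMake(0, 0, 1938, 1420);
 CGAffineTransform transform = CGAffineTransformMake(1, 0, 0, 1, 331, 161);

 // Crop box expressed in the CGImage's ("flipped") coordinate space:
 CGRect transformedCropBox = CGRectApplyAffineTransform(cropBox, transform);

 // The empirical fix: mirror the rect vertically within the bitmap.
 transformedCropBox.origin.y =
     CGImageGetHeight(defaultImage) - CGRectGetMaxY(transformedCropBox);

 CGImageRef croppedImage =
     CGImageCreateWithImageInRect(defaultImage, transformedCropBox);
 UIImage *image = [UIImage imageWithCGImage:croppedImage
                                      scale:[rep scale]
                                orientation:(UIImageOrientation)[rep orientation]];
 CGImageRelease(croppedImage);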

Does that work? (Both for this case and for images with other orientations?)

