CGAffineTransform modifies view.bounds?

Views have both a frame (coordinates in their superview's coordinate system) and bounds (coordinates in their own coordinate system), but once you transform a view you should not use or rely on the frame property anymore. If you use transforms, work with the bounds property, not the frame property, since transforms are applied to the bounds but are not necessarily reflected accurately in the frame.

http://iphonedevelopment.blogspot.jp/2008/10/demystifying-cgaffinetransform.html
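
To see what that advice amounts to in code, here is a minimal sketch with a hypothetical view (the view and numbers are mine, not from the blog post): after a transform is set, bounds and center keep their original values while frame does not.

    // Hypothetical view, scaled 1.5x (values are made up for illustration).
    UIView *box = [[UIView alloc] initWithFrame:CGRectMake(0, 124, 320, 160)];
    box.transform = CGAffineTransformMakeScale(1.5, 1.5);

    // frame is now just the bounding box of the transformed view; don't rely on it.
    NSLog(@"frame:  %@", NSStringFromCGRect(box.frame));    // {{-80, 84}, {480, 240}}
    // bounds and center are untouched by the transform; work with these instead.
    NSLog(@"bounds: %@", NSStringFromCGRect(box.bounds));   // {{0, 0}, {320, 160}}
    NSLog(@"center: %@", NSStringFromCGPoint(box.center));  // {160, 204}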

I wanted to see what the author means in practice, so I print the view's frame and bounds during a pinch, and I see that only the frame changes.

    - (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
        NSLog(@"scale: %f, velocity: %f", recognizer.scale, recognizer.velocity);
        NSLog(@"Before, frame: %@, bounds: %@",
              NSStringFromCGRect(recognizer.view.frame),
              NSStringFromCGRect(recognizer.view.bounds));
        recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform,
                                                           recognizer.scale, recognizer.scale);
        NSLog(@"After, frame: %@, bounds: %@",
              NSStringFromCGRect(recognizer.view.frame),
              NSStringFromCGRect(recognizer.view.bounds));
        recognizer.scale = 1;
    }

Output (zooming in):

    2012-07-02 14:53:51.458 GestureRec[1264:707] scale: 1.030111, velocity: 0.945660
    2012-07-02 14:53:51.466 GestureRec[1264:707] Before, frame: {{0, 124}, {320, 160}}, bounds: {{0, 0}, {320, 160}}
    2012-07-02 14:53:51.473 GestureRec[1264:707] After, frame: {{-4.81771, 121.591}, {329.635, 164.818}}, bounds: {{0, 0}, {320, 160}}
    2012-07-02 14:53:51.480 GestureRec[1264:707] scale: 1.074539, velocity: 1.889658
    2012-07-02 14:53:51.484 GestureRec[1264:707] Before, frame: {{-4.81771, 121.591}, {329.635, 164.818}}, bounds: {{0, 0}, {320, 160}}
    2012-07-02 14:53:51.494 GestureRec[1264:707] After, frame: {{-17.103, 115.449}, {354.206, 177.103}}, bounds: {{0, 0}, {320, 160}}
    2012-07-02 14:53:51.499 GestureRec[1264:707] scale: 1.000000, velocity: 1.889658
    2012-07-02 14:53:51.506 GestureRec[1264:707] Before, frame: {{-17.103, 115.449}, {354.206, 177.103}}, bounds: {{0, 0}, {320, 160}}
    2012-07-02 14:53:51.510 GestureRec[1264:707] After, frame: {{-17.103, 115.449}, {354.206, 177.103}}, bounds: {{0, 0}, {320, 160}}

Am I misunderstanding something, or is the author of the blog wrong?

+7
1 answer

I think I have figured it out:

The blog is right, and so is Apple itself:

To translate or scale the coordinate system, you alter the view's bounds rectangle ...

However, the bounds do not change, because that rectangle is still the same size you see reported; it only describes the view's own coordinate system. You see:

Altering the bounds rectangle sets up the basic coordinate system with which all drawing performed by the view begins.

and since we never change the bounds, only the view's frame relative to its superview changes.
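
To make that contrast concrete, here is a small sketch using a hypothetical 320x160 view (names and numbers are mine): altering the bounds translates the view's own coordinate system, as the Apple quote describes, while setting a transform leaves the bounds alone and only the frame is recomputed.

    // Sketch with a hypothetical 320x160 view (values are made up for illustration).
    UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 124, 320, 160)];

    // "Translate the coordinate system" as Apple describes: alter the bounds.
    // The view now draws its content offset by 50 points, yet the frame is unchanged.
    v.bounds = CGRectMake(50, 0, 320, 160);
    NSLog(@"frame: %@, bounds: %@",
          NSStringFromCGRect(v.frame), NSStringFromCGRect(v.bounds));
    // frame: {{0, 124}, {320, 160}}, bounds: {{50, 0}, {320, 160}}

    // Scale via a transform instead: the bounds stay put, only the frame is recomputed.
    v.bounds = CGRectMake(0, 0, 320, 160);
    v.transform = CGAffineTransformMakeScale(2, 2);
    NSLog(@"frame: %@, bounds: %@",
          NSStringFromCGRect(v.frame), NSStringFromCGRect(v.bounds));
    // frame: {{-160, 44}, {640, 320}}, bounds: {{0, 0}, {320, 160}}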

In fact, I can prove mathematically that the bounds never change! Run this sample:

 NSLog(@"scale: %f, velocity: %f", recognizer.scale, recognizer.velocity); NSLog(@"Before, frame: %@, bounds: %@", NSStringFromCGRect(recognizer.view.frame), NSStringFromCGRect(recognizer.view.bounds)); recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale); NSLog(@"After, frame: %@, bounds: %@ transform:%@", NSStringFromCGRect(recognizer.view.frame), NSStringFromCGRect(recognizer.view.bounds), NSStringFromCGAffineTransform(recognizer.view.transform)); recognizer.scale = 1; 

Now, if you look closely, the values reported by NSStringFromCGAffineTransform(), multiplied by the bounds, are exactly the view's frame. But what about the blog's warning? That simple relationship is not necessarily true in general: transform matrices can do more than scale x and y. If we really wanted to, we could rotate and flip as well (or even transform z values with a 3D transform), and each of these changes the frame property in far less predictable ways, especially when they are used in tandem.
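
For what it's worth, here is a minimal sketch of that multiplication, plugging in the first "After" values from the log above and assuming the default anchorPoint of (0.5, 0.5); the variable names are mine.

    // Sketch: recompute the frame from bounds + center + transform.
    // Values are taken from the first "After" line of the log above.
    CGRect bounds = CGRectMake(0, 0, 320, 160);
    CGPoint center = CGPointMake(160, 204);  // center of the original {{0, 124}, {320, 160}} frame
    CGAffineTransform t = CGAffineTransformMakeScale(1.030111, 1.030111);

    // Bounding box of the transformed bounds, re-centered on the view's center.
    CGRect scaled = CGRectApplyAffineTransform(bounds, t);
    CGRect computedFrame = CGRectMake(center.x - scaled.size.width / 2.0,
                                      center.y - scaled.size.height / 2.0,
                                      scaled.size.width,
                                      scaled.size.height);
    NSLog(@"computed frame: %@", NSStringFromCGRect(computedFrame));
    // Prints roughly {{-4.818, 121.591}, {329.636, 164.818}}, matching the logged
    // frame (up to rounding of the printed scale), while bounds itself never changed.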

An interesting puzzle, if I do say so myself.

+5
