Do I have to multiply points by the scale factor for the Retina display in this case?

Since the Retina display, this piece of drawing code suddenly doesn't seem to work anymore. The drawn image is slightly offset from where it was before, and it looks somewhat stretched.

I draw something in -drawRect: of a UIControl subclass. I realized that the current scale factor inside this UIControl is actually 2.0. The code gets a CGImage from a UIImage, which probably knows nothing about the scale, and passes it as a parameter to a method that also takes some point values:

CGContextDrawImage(context, CGRectMake(drawingRect.origin.x, drawingRect.origin.y, img.size.width, img.size.height), [img CGImage]); 

Note: drawingRect is in points. img.size.width inside NSLog displays the correct value in points, while [img CGImage] gives the @2x image for the Retina display. I checked this:

 NSLog(@"image height = %f (CGImage = %d)", img.size.height, CGImageGetHeight([img CGImage])); 

Console Output: image height = 31.000000 (CGImage = 62)

How can I handle the @2x image here? Do I have to multiply each value by the scale manually? But wouldn't that also ruin the visible rectangle on the screen?

1 answer

Yes.

 CGImageGetWidth([image CGImage]) == image.size.width * image.scale
 CGImageGetHeight([image CGImage]) == image.size.height * image.scale
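
For example, here is a minimal sketch (reusing img, drawingRect and context from the question) that keeps the destination rect in points by dividing the CGImage's pixel dimensions by the image's scale:

 // Minimal sketch, reusing img, drawingRect and context from the question.
 // CGImage dimensions are in pixels; dividing by the UIImage's scale
 // gives values back in points for the destination rect.
 CGFloat scale = img.scale; // 2.0 on a Retina device
 CGFloat widthInPoints = CGImageGetWidth([img CGImage]) / scale;
 CGFloat heightInPoints = CGImageGetHeight([img CGImage]) / scale;
 CGContextDrawImage(context,
                    CGRectMake(drawingRect.origin.x, drawingRect.origin.y,
                               widthInPoints, heightInPoints),
                    [img CGImage]);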

Alternatively, you can use -[UIImage drawAtPoint:], -[UIImage drawInRect:] and other similar methods that automatically handle the scale. If you go down to CGImage, you have to handle the scale yourself.
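
A minimal sketch of that UIImage-based approach inside -drawRect:, again reusing img and drawingRect from the question, might look like this:

 // drawInRect: works in points and respects the image's scale,
 // so no manual multiplication is needed.
 [img drawInRect:CGRectMake(drawingRect.origin.x, drawingRect.origin.y,
                            img.size.width, img.size.height)];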
