Anti-aliasing from CALayer renderInContext

I have a very simple UIView containing several black and white UIImageViews. If I take a screenshot using the physical buttons on the device, the resulting image looks exactly like what I see on screen (as expected) - if I examine the image at the pixel level, it contains only pure black and pure white.

However, if I use the following code snippet to perform the same capture programmatically, the result appears to have smoothing applied - all the black pixels are surrounded by faint gray halos. There is no gray in my original scene - it is purely black and white - and the "screenshot" image I generate programmatically has the same dimensions as the real device screenshot, so I can't understand where the gray halos come from.

    // Render the view's layer into an image context and save the result
    // to the photo album.
    UIView *printView = fullView;
    UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0); // 0.0 = device scale
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [printView.layer renderInContext:ctx];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    UIGraphicsEndImageContext();
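For reference, this is a minimal sketch of the pixel-level check described above (the helper name is illustrative, not part of the app - it redraws the image into an 8-bit grayscale buffer and scans for any value that is neither pure black nor pure white):

    #import <UIKit/UIKit.h>

    // Redraw `image` into a known 8-bit grayscale bitmap and report the
    // first pixel that is neither 0 (black) nor 255 (white).
    static void DumpGrayPixels(UIImage *image) {
        CGImageRef cg = image.CGImage;
        size_t w = CGImageGetWidth(cg);
        size_t h = CGImageGetHeight(cg);
        uint8_t *buf = calloc(w * h, 1);
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, w, gray,
                                                 kCGImageAlphaNone);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);
        for (size_t i = 0; i < w * h; i++) {
            if (buf[i] != 0 && buf[i] != 255) {
                NSLog(@"gray pixel (value %u) at index %zu", buf[i], i);
                break;
            }
        }
        CGContextRelease(ctx);
        CGColorSpaceRelease(gray);
        free(buf);
    }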

I tried adding the following before calling renderInContext in an attempt to prevent anti-aliasing, but it had no noticeable effect:

    CGContextSetShouldAntialias(ctx, NO);
    CGContextSetAllowsAntialiasing(ctx, NO);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);

Here is an example of two different outputs - the left side is what my code produces, and the right side is a regular iOS screenshot:

[Image: side-by-side comparison - renderInContext output (left) vs. device screenshot (right)]

Since I am sending the output of renderInContext to a monochrome printer, the presence of gray pixels causes some ugly artifacts due to the printer's smoothing algorithm.

So, how can I get renderInContext to produce the same pixel-level output as a real device screenshot - i.e. only black and white, as in my original scene?

ios calayer
2 answers

It turns out the issue was related to the resolution of the underlying UIImage used by the UIImageView. The UIImage wrapped a CGImage created with a data provider. The dimensions of the CGImage were specified in the same units as the parent UIImageView; however, I am running on an iOS device with a Retina display.

Because the CGImage dimensions were specified at non-Retina (1x) size, renderInContext scaled the CGImage up, and that scaling apparently behaves differently from the scaling performed by the actual screen rendering. (For whatever reason, the actual screen rendering scales without adding gray pixels.)

To fix this, I created my CGImage at twice the size of the UIImageView, and now my call to renderInContext produces a much better black and white image. There are still a few gray pixels in some of the white areas, but it is a huge improvement over the original problem.
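A minimal sketch of that fix, assuming an 8-bit grayscale bitmap and using [UIScreen mainScreen].scale as the 2x factor - imageView and the calloc'd buffer are placeholders for however your real pixel data is produced:

    CGFloat scale = [UIScreen mainScreen].scale;                // 2.0 on Retina
    size_t pixelWidth  = (size_t)(imageView.bounds.size.width  * scale);
    size_t pixelHeight = (size_t)(imageView.bounds.size.height * scale);
    size_t bytesPerRow = pixelWidth;                            // 1 byte per pixel

    uint8_t *pixels = calloc(bytesPerRow * pixelHeight, 1);     // placeholder data
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels,
                                     bytesPerRow * pixelHeight, NULL);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();

    // Create the CGImage at Retina pixel dimensions so that renderInContext
    // never has to upscale it - the upscaling is what added the gray halo.
    CGImageRef cgImage = CGImageCreate(pixelWidth, pixelHeight,
                                       8,   // bits per component
                                       8,   // bits per pixel
                                       bytesPerRow, gray, kCGImageAlphaNone,
                                       provider, NULL, NO,
                                       kCGRenderingIntentDefault);

    // Tag the UIImage with the matching scale so the view still lays out
    // at the same point size.
    imageView.image = [UIImage imageWithCGImage:cgImage
                                          scale:scale
                                    orientation:UIImageOrientationUp];

    CGImageRelease(cgImage);
    CGColorSpaceRelease(gray);
    CGDataProviderRelease(provider);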

I finally tracked this down by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0 and noticing that the UIImageView's black pixels no longer had a gray halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which had been the default because of the Retina display), the gray halo appeared.
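For anyone who wants to reproduce the diagnosis, it was just a change to the final scale argument of the snippet from the question:

    // With 1.0 the layer renders at 1 point = 1 pixel and the halo is gone;
    // with 0.0 (device scale, i.e. 2.0 on Retina) the halo comes back.
    UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 1.0);
    [printView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *test = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();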


I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.

Are the images displayed in UIImageView instances? Is printView their superview?
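In code, the suggestion is just this (a sketch - whether it helps depends on the layer contents actually being scaled at render time):

    // Nearest-neighbor sampling picks the closest source pixel instead of
    // interpolating between neighbors, so it cannot produce intermediate grays.
    printView.layer.magnificationFilter = kCAFilterNearest;
    printView.layer.minificationFilter  = kCAFilterNearest;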

