I have a very simple UIView containing several black-and-white UIImageViews. If I take a screenshot using the physical buttons on the device, the resulting image looks exactly like what I see on screen (as expected): examined at the pixel level, it contains only pure black and pure white pixels.
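(For reference, this is roughly how I examine the pixels. It's a minimal sketch that assumes an opaque 8-bit RGBA bitmap; countGrayPixels is just a name I made up for illustration.)

    // Draw the image into an RGBA bitmap and count pixels that are
    // neither pure black nor pure white. Assumes 8 bits per channel.
    static NSUInteger countGrayPixels(UIImage *image) {
        CGImageRef cgImage = image.CGImage;
        size_t width = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        size_t bytesPerRow = width * 4;
        uint8_t *data = calloc(height, bytesPerRow);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
        NSUInteger grayCount = 0;
        for (size_t i = 0; i < width * height; i++) {
            uint8_t r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
            if (!((r == 0 && g == 0 && b == 0) || (r == 255 && g == 255 && b == 255))) {
                grayCount++; // anything in between is a gray (anti-aliased) pixel
            }
        }
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(data);
        return grayCount;
    }

Run on the device screenshot this returns 0; run on the image produced by the code below, it returns a large count.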
However, if I use the following code snippet to perform the same capture programmatically, the result looks as though it has been smoothed: every black region is surrounded by a faint gray halo. There is no gray anywhere in my original scene (it is purely black and white), and the dimensions of the image I generate programmatically match those of the device screenshot, so I can't understand where the gray halos come from.
    UIView *printView = fullView;
    // A scale of 0.0 means "use the device's main screen scale".
    UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [printView.layer renderInContext:ctx];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    UIGraphicsEndImageContext();
I tried adding the following before calling renderInContext in an attempt to prevent anti-aliasing, but it has no noticeable effect:
    CGContextSetShouldAntialias(ctx, NO);
    CGContextSetAllowsAntialiasing(ctx, NO);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is an example of the two different outputs: the left side is what my code produces, and the right side is a native iOS screenshot:

Because I am sending the output of renderInContext to a monochrome printer, the presence of gray pixels causes some ugly artifacts due to the printer's smoothing algorithm.
So, how can I get renderInContext to produce the same pixel-level output for my views as a real device screenshot does, i.e. only pure black and white, as in my original scene?
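If there is no way to prevent the anti-aliasing at render time, a brute-force fallback would be to threshold the rendered bitmap back to pure black and white before printing. Here is a minimal sketch using the same bitmap-context approach as above; the 50% luminance cutoff and the function name are arbitrary choices of mine:

    // Snap every pixel to pure black or pure white using a simple
    // luminance threshold. Assumes an opaque image; the 128 cutoff is arbitrary.
    static UIImage *thresholdedImage(UIImage *image) {
        CGImageRef cgImage = image.CGImage;
        size_t width = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        size_t bytesPerRow = width * 4;
        uint8_t *data = calloc(height, bytesPerRow);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
        for (size_t i = 0; i < width * height; i++) {
            uint8_t *p = &data[i * 4];
            // Average the channels; mid-gray and darker becomes black.
            uint8_t v = ((p[0] + p[1] + p[2]) / 3 < 128) ? 0 : 255;
            p[0] = p[1] = p[2] = v;
            p[3] = 255;
        }
        CGImageRef resultCG = CGBitmapContextCreateImage(ctx);
        UIImage *result = [UIImage imageWithCGImage:resultCG
                                              scale:image.scale
                                        orientation:image.imageOrientation];
        CGImageRelease(resultCG);
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(data);
        return result;
    }

I would rather avoid this extra pass if renderInContext can be made to match the device screenshot directly.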
ios calayer