Attempting to dynamically color transparent UIImages, but I keep getting a blurry result. What am I doing wrong?

My iPhone application has a custom UITableViewCell subclass, each instance of which displays an icon. In the cell's normal state, these icons are black on a transparent background. Rather than bundling a second, inverted set of icons with the application for the selected state (white on a transparent background), I would like to invert the icons on the fly with Core Graphics whenever the user touches the corresponding table cell.

I found several other answers about overlaying a color onto a UIImage or recoloring UIImages, but all of those methods produce a blurry result for me (see below). I tried all sorts of CGBlendModes and also computed a more precise mask by hand (maybe I did it wrong), but it seems that the translucent pixels around the edges of my icons get their opacity borked or discarded altogether, giving a jagged/blurry look. I am at a loss as to what I am doing wrong.

It is also not an option for me to change all my icons to plain black/white without transparency. The icons need to sit on a transparent background so they can be overlaid on top of other UI elements.

The code (kindly provided by Chadwick Wood) that I use to invert an icon is listed further down; I call this method on each of my source icons, passing [UIColor whiteColor] as the second argument, roughly as sketched just below. Example output (on an iPhone 4 with iOS 4.1) follows (ignore the blue background of the highlighted images; it is the selection background of the selected table cell).
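
For context, this is roughly how the two images get wired into the cell. The reuse identifier and icon filename here are illustrative, not from my actual project; imageNamed:withColor: is the category method listed further down:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {

 static NSString *cellID = @"IconCell";

 UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID];
 if (cell == nil) {
  cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellID] autorelease];
 }

 // normal state: the black-on-transparent icon as it ships with the app
 cell.imageView.image = [UIImage imageNamed:@"icon.png"];

 // selected/highlighted state: the same icon recolored white on the fly;
 // UIImageView swaps to highlightedImage automatically while the cell is highlighted
 cell.imageView.highlightedImage = [UIImage imageNamed:@"icon.png" withColor:[UIColor whiteColor]];

 return cell;
}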

Any help is greatly appreciated.

Example input and output:

Icon before & after programmatic recoloring.

@implementation UIImage(FFExtensions)

+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color {

 // load the image
 UIImage *img = [UIImage imageNamed:name];

 // begin a new image context, to draw our colored image onto
 UIGraphicsBeginImageContext(img.size);

 // get a reference to that context we created
 CGContextRef context = UIGraphicsGetCurrentContext();
 CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

 // set the fill color
 [color setFill];

 // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
 CGContextTranslateCTM(context, 0, img.size.height);
 CGContextScaleCTM(context, 1.0, -1.0);

 // set the blend mode to multiply
 CGContextSetBlendMode(context, kCGBlendModeMultiply);
 CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
 //CGContextDrawImage(context, rect, img.CGImage);

 // clip to a mask that matches the shape of the image (in practice the icon's alpha channel defines the clip), then fill a colored rectangle
 CGContextClipToMask(context, rect, img.CGImage);
 CGContextAddRect(context, rect);
 CGContextDrawPath(context, kCGPathFill);

 // generate a new UIImage from the graphics context we drew onto
 UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
 UIGraphicsEndImageContext();

 // return the colored image
 return coloredImg;
}

@end
1 answer

Thanks to Peter's and Stephen's insight that the resolution of the output image was lower than that of the input, I realized that I wasn't accounting for the screen's scale factor when creating the image context. For example, a 60×60-pixel Retina icon reports an img.size of 30×30 points, so UIGraphicsBeginImageContext creates a 30×30-pixel (scale 1.0) context, and the resulting image comes back at half the needed resolution and looks blurry on screen.

Changing the line:

UIGraphicsBeginImageContext(img.size);

to

UIGraphicsBeginImageContextWithOptions(img.size, NO, [UIScreen mainScreen].scale);

fixes the problem.
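
As a side note (this detail comes from the documentation for UIGraphicsBeginImageContextWithOptions, not from the answer above): passing 0.0 for the scale parameter also uses the device's main screen scale, so this is equivalent:

UIGraphicsBeginImageContextWithOptions(img.size, NO, 0.0);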

