High-quality UIImage Scaling

I need to scale down the resolution of an image coming off the presentation layer in an iPhone application. The obvious way is to specify a scale factor via UIGraphicsBeginImageContextWithOptions, but whenever the scale factor is not 1.0 the image quality degrades badly, far more than you'd expect from the mere loss of pixels.
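
For reference, the obvious route looks roughly like this (a minimal sketch, not my exact code; naiveScaledImage:by: is a made-up helper name and the 0.5 factor is just an example):

    // Naive scaling: redraw the image into a smaller bitmap context.
    // Quality degrades noticeably whenever the factor is not 1.0.
    - (UIImage *)naiveScaledImage:(UIImage *)image by:(CGFloat)factor {
        CGSize newSize = CGSizeMake(image.size.width * factor,
                                    image.size.height * factor);
        UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }

    UIImage *smaller = [self naiveScaledImage:bigImage by:0.5];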

I have tried several other scaling methods, but all of them revolve around CGContext in one way or another, and all seem to produce the same result.
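
The CGContext-based variants I mean all boil down to something like this (a sketch; newSize is whatever target you want, and CGContextSetInterpolationQuality is the one explicit quality knob Core Graphics offers):

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Ask Core Graphics for its best resampling filter before drawing.
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    [sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();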

Simply giving the image a different size (without changing its pixel resolution) is not sufficient, mainly because that information seems to be discarded very quickly by other hands in the pipeline (the image will be converted to JPEG and emailed).
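
To be concrete about that pipeline (the MessageUI calls below are the standard ones; the quality value and file name are illustrative): the point size of a UIImage is just metadata over the pixel buffer, and only the pixels survive the JPEG step.

    #import <MessageUI/MessageUI.h>

    // Flatten to JPEG for mailing. Any point-size/scale adjustment on the
    // UIImage is gone by this stage; only the pixel dimensions are encoded.
    NSData *jpegData = UIImageJPEGRepresentation(image, 0.8);
    MFMailComposeViewController *mailer = [[MFMailComposeViewController alloc] init];
    [mailer addAttachmentData:jpegData
                     mimeType:@"image/jpeg"
                     fileName:@"photo.jpg"];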

Is there any other way to scale the image on iPhone?

+4
ios uikit image uiimage
May 18 '11 at 23:27
4 answers

Regarding the UIImage part of the question: there are many ways to handle a UIImage object, and any resize also has to deal with UIImage's orientation problems (camera images in particular carry an EXIF orientation). The method below handles both.




    -(UIImage*)resizedImageToSize:(CGSize)dstSize {
        CGImageRef imgRef = self.CGImage;
        // The values below are regardless of orientation: for UIImages from
        // the camera, width > height (landscape).
        CGSize srcSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef)); // not equivalent to self.size (which is dependent on the imageOrientation)!

        /* Don't resize if we already meet the required destination size. */
        if (CGSizeEqualToSize(srcSize, dstSize)) {
            return self;
        }

        CGFloat scaleRatio = dstSize.width / srcSize.width;

        // Handle the orientation problem of UIImage
        UIImageOrientation orient = self.imageOrientation;
        CGAffineTransform transform = CGAffineTransformIdentity;
        switch (orient) {
            case UIImageOrientationUp: //EXIF = 1
                transform = CGAffineTransformIdentity;
                break;
            case UIImageOrientationUpMirrored: //EXIF = 2
                transform = CGAffineTransformMakeTranslation(srcSize.width, 0.0);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                break;
            case UIImageOrientationDown: //EXIF = 3
                transform = CGAffineTransformMakeTranslation(srcSize.width, srcSize.height);
                transform = CGAffineTransformRotate(transform, M_PI);
                break;
            case UIImageOrientationDownMirrored: //EXIF = 4
                transform = CGAffineTransformMakeTranslation(0.0, srcSize.height);
                transform = CGAffineTransformScale(transform, 1.0, -1.0);
                break;
            case UIImageOrientationLeftMirrored: //EXIF = 5
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(srcSize.height, srcSize.width);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
                break;
            case UIImageOrientationLeft: //EXIF = 6
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(0.0, srcSize.width);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
                break;
            case UIImageOrientationRightMirrored: //EXIF = 7
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeScale(-1.0, 1.0);
                transform = CGAffineTransformRotate(transform, M_PI_2);
                break;
            case UIImageOrientationRight: //EXIF = 8
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(srcSize.height, 0.0);
                transform = CGAffineTransformRotate(transform, M_PI_2);
                break;
            default:
                [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
        }

        // The actual resize: draw the image on a new context, applying a transform matrix
        UIGraphicsBeginImageContextWithOptions(dstSize, NO, self.scale);

        CGContextRef context = UIGraphicsGetCurrentContext();
        if (!context) {
            return nil;
        }

        if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
            CGContextScaleCTM(context, -scaleRatio, scaleRatio);
            CGContextTranslateCTM(context, -srcSize.height, 0);
        } else {
            CGContextScaleCTM(context, scaleRatio, -scaleRatio);
            CGContextTranslateCTM(context, 0, -srcSize.height);
        }

        CGContextConcatCTM(context, transform);

        // We use srcSize (and not dstSize) because the size to specify is in
        // user space (and we use the CTM to apply the scaleRatio).
        CGContextDrawImage(context, CGRectMake(0, 0, srcSize.width, srcSize.height), imgRef);
        UIImage* resizedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return resizedImage;
    }
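
Since the method references self, it belongs in a UIImage category. Assuming a declaration like the following (the category name is mine), usage is one line:

    @interface UIImage (Resize)
    - (UIImage *)resizedImageToSize:(CGSize)dstSize;
    @end

    // Scale a camera image down, with orientation handled along the way.
    UIImage *small = [photo resizedImageToSize:CGSizeMake(320.0, 480.0)];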
+2
May 19 '11

A Swift extension:

    extension UIImage {
        // Returns a scaled version of the image.
        func imageScaledToSize(size: CGSize, isOpaque: Bool) -> UIImage {
            // Begin a context of the desired size.
            UIGraphicsBeginImageContextWithOptions(size, isOpaque, 0.0)
            // Draw the image in a rect with zero origin and the size of the context.
            let imageRect = CGRect(origin: CGPointZero, size: size)
            self.drawInRect(imageRect)
            // Get the scaled image, close the context and return the image.
            let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return scaledImage
        }
    }

Example:

    aUIImageView.image = aUIImage.imageScaledToSize(aUIImageView.bounds.size, isOpaque: false)

Set isOpaque to true if the image has no alpha channel: drawing will perform better.

+4
Jun 17 '15 at 14:13

I came up with this algorithm to create a half-sized image:

    // Free the malloc'd pixel buffer when the data provider is done with it.
    static void releaseTargetBytes(void *info, const void *data, size_t size) {
        free((void *)data);
    }

    - (UIImage*) halveImage:(UIImage*)sourceImage {
        // Compute the target size
        CGSize sourceSize = sourceImage.size;
        CGSize targetSize;
        targetSize.width = (int) (sourceSize.width / 2);
        targetSize.height = (int) (sourceSize.height / 2);

        // Access the source data bytes
        NSData* sourceData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(sourceImage.CGImage));
        unsigned char* sourceBytes = (unsigned char *)[sourceData bytes];

        // Some info we'll need later
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(sourceImage.CGImage);
        int bitsPerComponent = CGImageGetBitsPerComponent(sourceImage.CGImage);
        int bitsPerPixel = CGImageGetBitsPerPixel(sourceImage.CGImage);
        int __attribute__((unused)) bytesPerPixel = bitsPerPixel / 8;
        int sourceBytesPerRow = CGImageGetBytesPerRow(sourceImage.CGImage);
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(sourceImage.CGImage);

        assert(bytesPerPixel == 4);
        assert(bitsPerComponent == 8);
        // Bytes per row is (apparently) rounded to some boundary
        assert(sourceBytesPerRow >= ((int) sourceSize.width) * 4);
        assert([sourceData length] == ((int) sourceSize.height) * sourceBytesPerRow);

        // Allocate target data bytes
        int targetBytesPerRow = ((int) targetSize.width) * 4;
        // Algorithm is happier if bytes/row is a multiple of 16
        targetBytesPerRow = (targetBytesPerRow + 15) & 0xFFFFFFF0;
        int targetBytesSize = ((int) targetSize.height) * targetBytesPerRow;
        unsigned char* targetBytes = (unsigned char*) malloc(targetBytesSize);
        UIImage* targetImage = nil;

        // Copy source to target, averaging 4 pixels into 1
        for (int row = 0; row < targetSize.height; row++) {
            unsigned char* sourceRowStart = sourceBytes + (2 * row * sourceBytesPerRow);
            unsigned char* targetRowStart = targetBytes + (row * targetBytesPerRow);
            for (int column = 0; column < targetSize.width; column++) {
                int sourceColumnOffset = 2 * column * 4;
                int targetColumnOffset = column * 4;
                unsigned char* sourcePixel = sourceRowStart + sourceColumnOffset;
                unsigned char* nextRowSourcePixel = sourcePixel + sourceBytesPerRow;
                unsigned char* targetPixel = targetRowStart + targetColumnOffset;
                uint32_t* sourceWord = (uint32_t*) sourcePixel;
                uint32_t* nextRowSourceWord = (uint32_t*) nextRowSourcePixel;
                uint32_t* targetWord = (uint32_t*) targetPixel;
                uint32_t sourceWord0 = sourceWord[0];
                uint32_t sourceWord1 = sourceWord[1];
                uint32_t sourceWord2 = nextRowSourceWord[0];
                uint32_t sourceWord3 = nextRowSourceWord[1];
                // This apparently bizarre sequence divides each byte by 4 so
                // that, added together, the four pixels yield an average. We
                // do lose the least significant bits this way, and thus about
                // half a bit of resolution.
                sourceWord0 = (sourceWord0 & 0xFCFCFCFC) >> 2;
                sourceWord1 = (sourceWord1 & 0xFCFCFCFC) >> 2;
                sourceWord2 = (sourceWord2 & 0xFCFCFCFC) >> 2;
                sourceWord3 = (sourceWord3 & 0xFCFCFCFC) >> 2;
                uint32_t resultWord = sourceWord0 + sourceWord1 + sourceWord2 + sourceWord3;
                targetWord[0] = resultWord;
            }
        }

        // Convert the bits to an image. The release callback frees the target
        // bytes buffer once the provider is done with it.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, targetBytes, targetBytesSize, releaseTargetBytes);
        CGImageRef targetRef = CGImageCreate(targetSize.width, targetSize.height, bitsPerComponent, bitsPerPixel, targetBytesPerRow, colorSpace, bitmapInfo, provider, NULL, FALSE, kCGRenderingIntentDefault);
        targetImage = [UIImage imageWithCGImage:targetRef];

        // Clean up. Note: colorSpace came from a "Get" function, so we must
        // not release it; the UIImage retains the CGImage, so we can release
        // our references.
        CGImageRelease(targetRef);
        CGDataProviderRelease(provider);
        [sourceData release];

        // Return result
        return targetImage;
    }

I tried just taking every other pixel from every other row instead of averaging, but the result was as bad as the default algorithm.
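
A nice property of an exact 2x2 box filter like this is that it composes cleanly: halving repeatedly gives a quarter, an eighth, and so on, averaging honestly at every step. A sketch, assuming halveImage: lives on the same class (imageByHalving:times: is a made-up name):

    // Halve repeatedly to reach 1/2^count of the original dimensions.
    - (UIImage *)imageByHalving:(UIImage *)image times:(int)count {
        UIImage *result = image;
        for (int i = 0; i < count; i++) {
            result = [self halveImage:result];
        }
        return result;
    }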

+1
May 19 '11 at 20:24

I suppose you could use something like ImageMagick. Apparently it has been successfully ported to the iPhone: http://www.imagemagick.org/discourse-server/viewtopic.php?t=14089

I have always been pleased with the quality of images scaled by this library, so I think you will be satisfied with the results.
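
Assuming the iPhone port exposes the standard MagickWand C API (I have not verified exactly what that build ships), a resize would look roughly like this; Lanczos is generally the filter to pick for quality:

    #include <wand/MagickWand.h>

    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();
    MagickReadImage(wand, "input.jpg");
    // Lanczos resampling; the trailing 1.0 is the blur factor (1.0 = none).
    MagickResizeImage(wand, 480, 320, LanczosFilter, 1.0);
    MagickWriteImage(wand, "output.jpg");
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();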

0
May 19 '11


