iOS: capture the screen, then crop and mask the result image

In my application, I want to take the following steps:

1 - Capture the screen. This part is not a problem for me; I use the following code:

    - (UIImage *)captureScreen {
        UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 0.0f);
        [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }

2 - I crop the image using this function:

    - (UIImage *)cropImage:(UIImage *)image inRect:(CGRect)rect {
        CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect);
        UIImage *resultImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        return resultImage;
    }

3 - Then I mask the cropped image with a black-and-white mask:

    - (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
        CGImageRef maskRef = maskImage.CGImage;
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef),
                                            NULL, false);
        CGImageRef maskedRef = CGImageCreateWithMask([image CGImage], mask);
        UIImage *resultImage = [UIImage imageWithCGImage:maskedRef];
        CGImageRelease(mask);
        CGImageRelease(maskedRef);
        return resultImage;
    }

However, in the result I obtain, the area outside the mask's shape is black instead of transparent. Can anybody help me?

2 answers

I solved my problem: it happened because the image being masked had no alpha channel (note that the capture code above passes YES for the opaque flag, so the screenshot is created without one). So, before masking, I create another UIImage with an alpha channel and then continue with my steps.

This is the code for creating a UIImage with alpha:

    - (UIImage *)imageWithAlpha {
        CGImageRef imageRef = self.CGImage;
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 4 * width,
                                                     colorSpace, kCGImageAlphaPremultipliedFirst);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef resultImageRef = CGBitmapContextCreateImage(context);
        UIImage *resultImage = [UIImage imageWithCGImage:resultImageRef
                                                   scale:self.scale
                                             orientation:self.imageOrientation];
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(resultImageRef);
        return resultImage;
    }

This works for me. Hope this works for you too.

    - (UIImage *)doImageMask:(UIImage *)mainImage withMask:(UIImage *)maskImage {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef maskImageRef = [maskImage CGImage];

        // Create a bitmap graphics context the size of the mask image
        CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL,
            maskImage.size.width, maskImage.size.height, 8, 0, colorSpace,
            kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (mainViewContentContext == NULL) {
            return nil;
        }

        // Aspect-fill: scale the main image so it fully covers the mask
        CGFloat ratio = maskImage.size.width / mainImage.size.width;
        if (ratio * mainImage.size.height < maskImage.size.height) {
            ratio = maskImage.size.height / mainImage.size.height;
        }

        CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
        CGRect rect2 = {{-((mainImage.size.width * ratio) - maskImage.size.width) / 2,
                         -((mainImage.size.height * ratio) - maskImage.size.height) / 2},
                        {mainImage.size.width * ratio, mainImage.size.height * ratio}};

        CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
        CGContextDrawImage(mainViewContentContext, rect2, mainImage.CGImage);

        // Create a CGImage from the bitmap context, then release the context
        CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
        CGContextRelease(mainViewContentContext);

        UIImage *theImage = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        return theImage;
    }

Source: https://habr.com/ru/post/1416293/
