How to erase part of an image as the user touches it

My goal is to have a gray box sitting on top of an image, and when the user rubs the gray box, the image below shows through, basically like a scratch-off lottery card. I did a bunch of searches in the docs as well as on this site, but can't find a solution.

The following is just a proof of concept to test "erasing" an image based on where the user touches, but it doesn't work. :(

I have a UIView that detects touches, then sends the coordinates of the move to the UIViewController, which clips the image in a UIImageView by doing the following:

    - (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to {
        UIImage *image = bkgdImageView.image;
        CGSize s = image.size;
        UIGraphicsBeginImageContext(s);
        CGContextRef g = UIGraphicsGetCurrentContext();
        CGContextMoveToPoint(g, from.x, from.y);
        CGContextAddLineToPoint(g, to.x, to.y);
        CGContextClosePath(g);
        CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
        CGContextEOClip(g);
        [image drawAtPoint:CGPointZero];
        bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [bkgdImageView setNeedsDisplay];
    }
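
For reference, the touch-detecting view basically just forwards touchesMoved: along these lines (simplified; the TouchView and EraseDelegate names are only illustrative of how I have it wired up):

    @protocol EraseDelegate
    - (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to;
    @end

    @interface TouchView : UIView {
        id <EraseDelegate> controller;   // the view controller, assigned elsewhere
    }
    @end

    @implementation TouchView

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        // Hand the previous and current locations of the drag to the controller.
        [controller moveDetectedFrom:[touch previousLocationInView:self]
                                  to:[touch locationInView:self]];
    }

    @end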

The problem is that the touches are sent to moveDetectedFrom:to: just fine, but nothing happens to the image.

Am I doing the clipping wrong? Or is it something else?

I'm not quite sure, so any help you may have would be greatly appreciated.

Thanks in advance, Joel

+4
2 answers

I tried to do the same thing a while ago using only Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects. So, I knew how to work with OpenCV (the Open Computer Vision library), and since it is written in C, I knew I could use it on the iPhone. Doing what you want with OpenCV is extremely easy. First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and the other way around.

    + (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
        // This is the function you use to convert a UIImage -> IplImage
        CGImageRef imageRef = image.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
        CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                        iplimage->depth, iplimage->widthStep, colorSpace,
                                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
        CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
        CGContextRelease(contextRef);
        CGColorSpaceRelease(colorSpace);
        return iplimage;
    }

    + (UIImage *)UIImageFromIplImage:(IplImage *)image {
        // Convert an IplImage -> UIImage
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
        //NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
        CGImageRef imageRef = CGImageCreate(image->width, image->height, image->depth, image->depth * image->nChannels,
                                            image->widthStep, colorSpace,
                                            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);
        [data release];
        return ret;
    }

Now that you have both of the basic functions you need, you can do whatever you want with the IplImage. This is what you want:

    + (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
        // r is the radius of the erasing
        int a = point.x;
        int b = point.y;
        int position;
        int minX, minY, maxX, maxY;
        // Clamp the bounding box of the circle to the image bounds
        minX = (a - r > 0) ? a - r : 0;
        minY = (b - r > 0) ? b - r : 0;
        maxX = ((a + r) < (image->width))  ? a + r : (image->width);
        maxY = ((b + r) < (image->height)) ? b + r : (image->height);
        for (int i = minX; i < maxX; i++) {
            for (int j = minY; j < maxY; j++) {
                // Squared distance from the touch point; only clear pixels inside the circle
                position = ((j - b) * (j - b)) + ((i - a) * (i - a));
                if (position <= r * r) {
                    uchar *ptr = (uchar *)(image->imageData) + (j * image->widthStep + i * image->nChannels);
                    ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;   // zero out all 4 channels (RGBA)
                }
            }
        }
        UIImage *res = [self UIImageFromIplImage:image];
        return res;
    }

Sorry for the formatting.
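
To hook it up, your move handler would do something along these lines on every drag (just a rough sketch; ImageUtils stands for whichever class you put the methods above in, and the radius of 10 is arbitrary):

    - (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to {
        // Convert, erase a circle at the current touch location, convert back.
        IplImage *iplImage = [ImageUtils CreateIplImageFromUIImage:bkgdImageView.image];
        UIImage *erased = [ImageUtils erasePointinUIImage:iplImage :to :10];
        cvReleaseImage(&iplImage);
        bkgdImageView.image = erased;
        [erased release];   // UIImageFromIplImage returns a retained object (pre-ARC)
    }

Converting the whole image back and forth on every move is wasteful, so in practice you would keep the IplImage around between touches and only convert back to a UIImage when you need to update the screen.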

If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's write-up on the subject.

If you want to try an application that currently uses OpenCV and is on the App Store, check out Flags and Symbols.

+1

Usually you only want to draw to the current graphics context inside a drawRect: method, not just any old method. Also, a clipping region only affects what is subsequently drawn into the current graphics context. But rather than trying to figure out why this approach doesn't work, I suggest doing it differently.

What I would do is have two views: one with the image, and one with the gray color that gets made transparent. This lets the graphics hardware cache the image, rather than trying to redraw the image every time you update the gray fill.

The gray view would be a subclass of UIView backed by a CGBitmapContext that you draw into to clear the pixels the user touches.
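
A rough sketch of what that overlay view could look like (untested, just to illustrate the idea; the class name is made up, and retina scaling and error handling are omitted):

    @interface ScratchOverlayView : UIView {
        CGContextRef bitmapContext;   // holds the gray layer; erased pixels become transparent
    }
    - (void)erasePoint:(CGPoint)point radius:(CGFloat)radius;
    @end

    @implementation ScratchOverlayView

    - (id)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            self.opaque = NO;
            self.backgroundColor = [UIColor clearColor];
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            bitmapContext = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height, 8, 0,
                                                  colorSpace, kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);
            // Start out fully gray; erasing punches transparent holes into this bitmap.
            CGContextSetRGBFillColor(bitmapContext, 0.5, 0.5, 0.5, 1.0);
            CGContextFillRect(bitmapContext, CGRectMake(0, 0, frame.size.width, frame.size.height));
        }
        return self;
    }

    - (void)erasePoint:(CGPoint)point radius:(CGFloat)radius {
        // Clear a circle of pixels at the touch location.
        CGContextSetBlendMode(bitmapContext, kCGBlendModeClear);
        CGContextFillEllipseInRect(bitmapContext,
            CGRectMake(point.x - radius, point.y - radius, radius * 2, radius * 2));
        CGContextSetBlendMode(bitmapContext, kCGBlendModeNormal);
        [self setNeedsDisplay];
    }

    - (void)drawRect:(CGRect)rect {
        // Composite the gray bitmap (with its transparent holes) over whatever sits behind this view.
        CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
        CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, cgImage);
        CGImageRelease(cgImage);
    }

    - (void)dealloc {
        CGContextRelease(bitmapContext);
        [super dealloc];
    }

    @end

Put this view on top of the UIImageView and have your touch handling call erasePoint:radius: with the touch location; only the small overlay gets redrawn, while the image underneath stays cached.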

There are probably several ways to do this; I'm just suggesting one of them above.

+1

Source: https://habr.com/ru/post/1311922/

