Why is resizing a UIImage from the camera so slow?

Resizing a camera UIImage returned by the UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.

[Update: last call for creative ideas here! My next option is to go ask Apple, I think.]

Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024x1024 textured quads onto the screen in 1/60th of a second, so there really should be a way to resize a 2048x1536 image down to 640x480 in a lot less than 1.5 seconds.

So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be massaged in some fashion that the GPU can't help with?

My best guess: it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way to convince the system to give me the data quickly, even if it's in the wrong format, so I can deal with it myself later?

As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.

So the question is: is there some alternative drawing method besides using CGBitmapContextCreate and CGContextDrawImage that takes better advantage of the GPU?
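For reference, a minimal sketch of that "usual way" (the function name, target size, and pixel format here are illustrative, not taken from the question):

    #import <UIKit/UIKit.h>

    // Render the picker's UIImage into a smaller bitmap context
    // with Core Graphics, then wrap the result back in a UIImage.
    static UIImage *ScaledImageUsualWay(UIImage *image, CGSize destSize) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     (size_t)destSize.width,
                                                     (size_t)destSize.height,
                                                     8,                           // bits per component
                                                     (size_t)destSize.width * 4,  // bytes per row
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);

        // This is the call that takes ~1.5 s on a full-resolution camera image.
        CGContextDrawImage(context,
                           CGRectMake(0, 0, destSize.width, destSize.height),
                           [image CGImage]);

        CGImageRef scaledRef = CGBitmapContextCreateImage(context);
        UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
        CGImageRelease(scaledRef);
        CGContextRelease(context);
        return scaled;
    }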

Something to investigate: if I start with a UIImage of the same size that didn't come from the image picker, is it just as slow? Apparently not...

Update: Matt Long found that it takes only 30 ms to resize the image the picker returns in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I use takePicture to take pictures programmatically. I see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
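(For anyone reproducing this, a quick way to inspect those formats inside the picker delegate's info dictionary; the variable names are illustrative:)

    // Inspect the alpha/bitmap info of both images from the picker
    // (dictionary keys as used above).
    UIImage *edited   = [info objectForKey:@"UIImagePickerControllerEditedImage"];
    UIImage *original = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    NSLog(@"edited alpha info: %d, original alpha info: %d",
          CGImageGetAlphaInfo([edited CGImage]),
          CGImageGetAlphaInfo([original CGImage]));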

Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 s to 1.3 s, at a cost in image quality, but that's still far from the speed the GPU should be capable of!

Last update before the week runs out: user refulgentis did some profiling, which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, that's very strange.

+4
5 answers

Use Shark, profile it, and figure out what's taking so long.

I have to work a lot with MediaPlayer.framework, and when you get properties for songs on the iPod, the first property request is insanely slow compared to subsequent requests, because in the first property request MobileMediaPlayer packs up a dictionary with all the properties and passes it to my app.

I bet there is a similar situation here.

EDIT: I was able to do a time profile in Shark of both Matt Long's UIImagePickerControllerEditedImage situation and the generic UIImagePickerControllerOriginalImage situation.

In both cases, a majority of the time is taken up by CGContextDrawImage. In Matt Long's case, the UIImagePickerController takes care of this in between the user capturing the image and the image entering "edit" mode.

Scaling the percentage of time taken to CGContextDrawImage = 100%, CGContextDelegateDrawImage then takes 100%, then ripc_DrawImage (from libRIP.A.dylib) takes 100%, and then ripc_AcquireImage (which looks like it's decompressing the JPEG, and takes most of its time in _cg_jpeg_idct_islow, vec_ycc_bgrx_convert, decompress_onepass, sep_upsample) takes 93% of the time. Only 7% of the time is actually spent in ripc_RenderImage, which I assume is the actual drawing.

+2

You seem to have made several assumptions here that may or may not be true. My experience is different from yours. This method seems to take only 20-30 ms on my 3G when scaling a photo snapped from the camera to 0.31 of the original size with a call to:

 CGImageRef scaled = CreateScaledCGImageFromCGImage([image CGImage], 0.31); 

(I arrive at 0.31 by taking the width scale, 640.0/2048.0, by the way.)

I've checked to make sure the image is the same size you're working with. Here's my NSLog output:

    2009-12-07 16:32:12.941 ImagePickerThing[8709:207] Info: {
        UIImagePickerControllerCropRect = NSRect: {{0, 0}, {2048, 1536}};
        UIImagePickerControllerEditedImage = <UIImage: 0x16c1e0>;
        UIImagePickerControllerMediaType = "public.image";
        UIImagePickerControllerOriginalImage = <UIImage: 0x184ca0>;
    }
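For context, a sketch of roughly the delegate method that produces a log like that; CreateScaledCGImageFromCGImage is the helper from the blog post linked in the question, and the rest is illustrative:

    - (void)imagePickerController:(UIImagePickerController *)picker
            didFinishPickingMediaWithInfo:(NSDictionary *)info {
        NSLog(@"Info: %@", info);
        UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];

        // Helper from the linked blog post; 0.31 = 640.0 / 2048.0
        CGImageRef scaled = CreateScaledCGImageFromCGImage([image CGImage], 0.31);

        // ... use the scaled image, then clean up ...
        CGImageRelease(scaled);
        [picker dismissModalViewControllerAnimated:YES];
    }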

I'm not sure what accounts for the difference, and I can't answer your question as it relates to the GPU; however, I would consider 1.5 seconds and 30 ms very significantly different. Maybe compare the code in that blog post to what you're using?

Regards.

+3
source

I had the same problem and banged my head against it for a long time. As far as I can tell, the first time you access the UIImage returned by the image picker, it's just slow. As an experiment, try timing any two operations with the UIImage, e.g. your scale-down and then UIImageJPEGRepresentation or something, then switch the order. When I've done this in the past, the first operation gets a time penalty. My best hypothesis is that the memory is still on the CCD somehow, and transferring it into main memory to do anything with it is slow.
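A concrete version of that experiment might look like this (imageFromPicker is assumed to be the UIImage from the picker; swap the order of the two operations between runs and compare the timings):

    CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
    NSData *jpegData = UIImageJPEGRepresentation(imageFromPicker, 0.8);  // operation A
    CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
    NSData *pngData = UIImagePNGRepresentation(imageFromPicker);         // operation B
    CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();

    // Whichever operation runs first should absorb the one-time penalty.
    NSLog(@"A: %.0f ms, B: %.0f ms", (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);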

When you set allowsImageEditing = YES, the image you get back is resized and cropped down to approximately 320x320. That makes it faster, but it's probably not what you want.

The best acceleration I've found is:

 CGContextSetInterpolationQuality(context, kCGInterpolationLow) 

on the context you get back from CGBitmapContextCreate, before you call CGContextDrawImage.
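In context, the one-line change slots in like this (bitmap-context setup condensed; the target size is illustrative):

    size_t w = 640, h = 480;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, w * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    // Ask Core Graphics for the cheap scaling filter before drawing.
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), [image CGImage]);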

The problem is that your scaled-down images might not look as good. However, if you're scaling down by an integer factor, e.g. from 1600x1200 to 800x600, then it looks fine.

+2

Here's a git project that I've used, and it works well. The usage is pretty clean as well: one line of code.

https://github.com/AliSoftware/UIImage-Resize
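If I remember the category's API correctly, usage looks something like the following; double-check the method name against the header in the version of the repo you actually pull:

    #import "UIImage+Resize.h"

    // Method name per the repo's README as I recall it; verify before use.
    UIImage *thumb = [bigImage resizedImageToFitInSize:CGSizeMake(640.0, 480.0)
                                        scaleIfSmaller:NO];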

+1

DO NOT USE CGBitmapContextCreate in this case! I spent almost a week in the same situation you are in. Performance will be absolutely terrible and it eats up memory like crazy. Use UIGraphicsBeginImageContext instead:

    // create a new CGImage of the desired size
    UIGraphicsBeginImageContext(desiredImageSize);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // clear the new image
    CGContextClearRect(c, CGRectMake(0, 0, desiredImageSize.width, desiredImageSize.height));

    // draw in the image
    CGContextDrawImage(c, rect, [image CGImage]);

    // return the result to our parent controller
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

In the example above (taken from my own image-resizing code), "rect" is significantly smaller than the image. The code above runs very fast and should do exactly what you need.

I'm not entirely sure why UIGraphicsBeginImageContext is so much faster, but I believe it has something to do with memory allocation. I've noticed that this approach requires significantly less memory, implying that the OS has already allocated space for an image context somewhere.

-1
