How to deal with memory limitations in iOS 8 Photo Extensions?

I added a new iOS 8 extension to my existing photo editing application. My app has a rather complex filter pipeline and needs to keep several textures in memory at the same time. Even so, on devices with 1 GB of RAM I can easily process 8-megapixel images.

The extension, however, runs under much tighter memory constraints. I had to downscale the image to under 2 MP to process it without the extension being killed. I also noticed that the memory problems only occur when no debugger is attached to the extension. With the debugger attached, everything works fine.

I did some experiments. I modified a memory-budget test app to run inside the extension and came up with the following results (showing the amount of RAM in MB that can be allocated before the process is killed):

╔═══════════════════════╦═════╦═══════════╦══════════════════╗
β•‘ Device                β•‘ App β•‘ Extension β•‘ Ext. (+Debugger) β•‘
╠═══════════════════════╬═════╬═══════════╬══════════════════╣
β•‘ iPhone 6 Plus (8.0.2) β•‘ 646 β•‘ 115       β•‘ 645              β•‘
β•‘ iPhone 5 (8.1 beta 2) β•‘ 647 β•‘ 97        β•‘ 646              β•‘
β•‘ iPhone 4s (8.0.2)     β•‘ 305 β•‘ 97        β•‘ 246              β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
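
A simplified sketch of this kind of memory probe (an illustration of the measuring approach, not the exact test app): allocate and dirty 1 MB blocks until the system kills the process, then read the last logged value from the device console.

#import <Foundation/Foundation.h>
#import <string.h>

static void runMemoryBudgetProbe(void)
{
    const size_t blockSize = 1024 * 1024;                 // 1 MB per allocation
    NSMutableArray *blocks = [NSMutableArray array];      // keep the blocks alive
    for (NSUInteger allocatedMB = 1; ; allocatedMB++) {
        NSMutableData *block = [NSMutableData dataWithLength:blockSize];
        memset(block.mutableBytes, 0xFF, blockSize);      // dirty the pages so they count
        [blocks addObject:block];
        NSLog(@"Allocated %lu MB", (unsigned long)allocatedMB);
        // Once the budget is exceeded the system kills the process (jetsam);
        // the last number that reaches the log approximates the budget.
    }
}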

A few observations:

  • With a debugger attached, the extension behaves like a "normal" application.
  • Even though the 4s has only half the total memory (512 MB) of the other devices, it gets the same ~100 MB from the system for the extension.

Now my question is: how am I supposed to work with this small amount of memory in a Photo Editing extension? One texture holding an 8-megapixel (camera resolution) RGBA image eats ~31 MB on its own. What is the point of this extension mechanism if I have to tell the user that full-size editing is only possible in the main application?
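
For context, the ~31 MB figure is just the pixel math (the exact camera resolution below is an assumption; any ~8 MP sensor gives a similar number):

#import <Foundation/Foundation.h>

static double megabytesForRGBATexture(size_t width, size_t height)
{
    const size_t bytesPerPixel = 4;                       // R, G, B, A at 8 bits each
    return (double)(width * height * bytesPerPixel) / (1024.0 * 1024.0);
}

// megabytesForRGBATexture(3264, 2448) ≈ 30.5 MB, so three or four working
// textures already exceed the ~100 MB extension budget measured above.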

Has anyone else hit this barrier? Have you found a way around this limitation?

memory ios8 ios8-extension
3 answers

I am developing a Photo Editing extension for my company, and we are facing the same problem. Our internal image-processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panoramic images, which take about 100 MB of memory each.

We found only two workarounds, but no actual solution.

  • Downscale the image, then filter it. This requires less memory, but the result looks terrible. At least the extension won't crash. (A sketch of this approach follows the list.)

or

  • Use Core Image or Metal for the image processing. We analyzed Apple's Photo Editing extension sample, which uses Core Image: it handles very large images, and even panoramas, without loss of quality or resolution. In fact, we were not able to crash the extension even by loading very large images. The sample code processes panoramas with a memory spike of about 40 MB, which is quite impressive.
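
For the first option, a minimal downscaling sketch (my illustration, not our actual engine code; the CIImage source URL and the 2048 px target are assumptions). CILanczosScaleTransform is a built-in Core Image filter:

#import <CoreImage/CoreImage.h>

static CIImage *downscaledImage(NSURL *fullSizeImageURL, CGFloat maxDimension)
{
    CIImage *source  = [CIImage imageWithContentsOfURL:fullSizeImageURL];
    CGFloat longEdge = MAX(source.extent.size.width, source.extent.size.height);
    CGFloat scale    = MIN(1.0, maxDimension / longEdge);   // never upscale

    // High-quality resample; once rendered, the result needs only scale^2
    // of the original image's memory.
    return [CIFilter filterWithName:@"CILanczosScaleTransform"
                      keysAndValues:kCIInputImageKey, source,
                                    @"inputScale", @(scale),
                                    @"inputAspectRatio", @1.0,
                                    nil].outputImage;
}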

According to Apple's App Extension Programming Guide, p. 55, chapter "Handle Memory Constraints", the answer to memory pressure in extensions is to review your image-processing code. So far we have been porting our image-processing engine to Core Image, and the results are far better than with our previous engine.
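
For the Core Image route, the key point (as we understand the sample) is that the filter chain stays a lightweight recipe, and pixels only exist while a CIContext renders them into the extension's output file. A minimal sketch of that pattern (my illustration, not Apple's sample code; the sepia filter is just a placeholder, and writing adjustment data is omitted):

#import <Photos/Photos.h>
#import <CoreImage/CoreImage.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

static void renderEditedImage(PHContentEditingInput *input, PHContentEditingOutput *output)
{
    // Build the recipe lazily; nothing is decoded or rendered yet.
    CIImage *image    = [CIImage imageWithContentsOfURL:input.fullSizeImageURL];
    CIImage *filtered = [image imageByApplyingFilter:@"CISepiaTone"
                                 withInputParameters:@{kCIInputIntensityKey : @0.8}];

    // Rendering happens here; Core Image processes the chain internally and
    // only the final bitmap is materialised.
    CIContext *context  = [CIContext contextWithOptions:nil];
    CGImageRef rendered = [context createCGImage:filtered fromRect:filtered.extent];

    // Write the JPEG that Photos expects at renderedContentURL.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)output.renderedContentURL,
                                        kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(dest, rendered, NULL);
    CGImageDestinationFinalize(dest);
    CFRelease(dest);
    CGImageRelease(rendered);
}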

Hope this helps a bit. Marco Paiva


If you use a Core Image "recipe", you don't need to worry about memory at all, just as Marco said. No image to which Core Image filters are applied is rendered until the image object is returned to the view.

This means you can apply a million filters to a photo the size of a highway billboard, and memory will not be the problem. The filter specifications are simply compiled into a convolution or kernel, which all come down to the same size, no matter what.
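
To make that concrete, here is a small illustration of the point (my sketch, not part of the original answer; `imageURL` is assumed): building the chain costs almost nothing, and memory is only committed when a context walks the graph once.

#import <CoreImage/CoreImage.h>

static CGImageRef renderChainedFilters(NSURL *imageURL)
{
    CIImage *recipe = [CIImage imageWithContentsOfURL:imageURL];

    // One hundred chained filters are still just a lightweight object graph
    // describing the work; no pixel buffers exist yet.
    for (int i = 0; i < 100; i++) {
        recipe = [recipe imageByApplyingFilter:@"CIColorControls"
                           withInputParameters:@{kCIInputContrastKey : @1.01}];
    }

    // Pixels are produced only here, in a single concatenated pass.
    CIContext *context = [CIContext contextWithOptions:nil];
    return [context createCGImage:recipe fromRect:recipe.extent];
}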

Misunderstandings about memory management, overflow and the like are easily cleared up once you orient yourself with the core concepts of your chosen programming language, development environment and hardware platform.

Apple's documentation introducing Core Image filter programming is sufficient for this; if you want specific references to the parts of the documentation that I think are relevant to your problem, just ask.


Here's how you apply two consecutive convolution kernels in Core Image with an β€œintermediate result” between them:

 - (CIImage *)outputImage
 {
     const double g = self.inputIntensity.doubleValue;

     // First pass: vertical edge kernel, scaled by the intensity parameter.
     const CGFloat weights_v[] = { -1*g, 0*g, 1*g,
                                   -1*g, 0*g, 1*g,
                                   -1*g, 0*g, 1*g };
     CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3"
                                  keysAndValues:
                            @"inputImage", self.inputImage,
                            @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                            @"inputBias", [NSNumber numberWithFloat:1.0],
                            nil].outputImage;

     // The convolution changes the extent, so crop the intermediate result back
     // to the original image rectangle before feeding it to the next kernel.
     CGRect rect = [self.inputImage extent];
     rect.origin = CGPointZero;
     CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
     CIVector *cropRect = [CIVector vectorWithX:rect.origin.x
                                              Y:rect.origin.y
                                              Z:rect.size.width
                                              W:rect.size.height];
     result = [result imageByCroppingToRect:cropRectLeft];
     result = [CIFilter filterWithName:@"CICrop"
                         keysAndValues:@"inputImage", result,
                                       @"inputRectangle", cropRect,
                                       nil].outputImage;

     // Second pass: horizontal edge kernel (the "intermediate result" sits between the two).
     const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                    0*g,  0*g,  0*g,
                                    1*g,  1*g,  1*g };
     result = [CIFilter filterWithName:@"CIConvolution3X3"
                         keysAndValues:
                   @"inputImage", result,
                   @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
                   @"inputBias", [NSNumber numberWithFloat:1.0],
                   nil].outputImage;

     // Crop again, then invert the colors for the final output.
     result = [result imageByCroppingToRect:cropRectLeft];
     result = [CIFilter filterWithName:@"CICrop"
                         keysAndValues:@"inputImage", result,
                                       @"inputRectangle", cropRect,
                                       nil].outputImage;
     result = [CIFilter filterWithName:@"CIColorInvert"
                         keysAndValues:kCIInputImageKey, result,
                                       nil].outputImage;

     return result;
 }
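
Hypothetical usage of the method above, assuming it lives in a CIFilter subclass (here called EdgeConvolutionFilter, a name made up for the example) with inputImage and inputIntensity properties, and with `imageURL` assumed; only the final render call materialises pixels:

EdgeConvolutionFilter *filter = [[EdgeConvolutionFilter alloc] init];
filter.inputImage     = [CIImage imageWithContentsOfURL:imageURL];
filter.inputIntensity = @2.0;

// The two convolutions, the crops and the color invert are all folded into one
// recipe; this render is the only step that allocates pixel memory.
CIContext *context  = [CIContext contextWithOptions:nil];
CGImageRef rendered = [context createCGImage:filter.outputImage
                                     fromRect:filter.outputImage.extent];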

