Detail-Based Image Scaling

I am wondering whether there are approaches or algorithms that can downscale an image based on the amount of detail, or entropy, it contains, so that the new size is the lowest resolution at which most of the original image's detail is preserved.

For example, an out-of-focus or shaky shot will contain less detail (less high-frequency content) than one taken in focus from a position fixed relative to the scene. The lower-entropy image could be downscaled substantially and still retain most of its detail if it later needs to be scaled back up to its original size. A more detailed image, by contrast, could not be reduced without losing significant detail.

Of course, I understand that many lossy image formats, including JPEG, do something similar, in the sense that the amount of data needed to store an image of a given resolution is proportional to the entropy of the image data. But I'm curious, mainly out of my own interest, whether there could be a computationally efficient approach to scaling resolution according to image content.

1 answer

Arguably, most lossy image compression schemes, from the DCT used in JPEG to fractal compression, essentially do this in their own way.

Note that such methods almost always operate on small blocks of the image rather than on the image as a whole, so that compression can be maximized where the content allows rather than applying the same settings everywhere. The latter would likely give poor compression and/or high loss on most "real" images, which usually contain a mix of detail levels, although there are exceptions such as your out-of-focus example.
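To make that concrete, here is a toy sketch (NumPy assumed; the 8x8 block size and variance-as-detail measure are my own choices, not anything standard) showing how unevenly local detail is distributed across an image:

```python
import numpy as np

def block_variances(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Variance of each non-overlapping block x block tile (edges cropped)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.var(axis=(1, 3))

img = np.random.default_rng(0).random((64, 64))  # stand-in for a real photo
v = block_variances(img)
print(v.min(), v.max())
```

On a real photograph these per-block values typically span a wide range, which is why adapting settings per block pays off over one global choice.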

You would need to define what constitutes "most of the detail of the original image," since perfect reconstruction is only possible for fairly contrived images. You would also need to specify the exact form of rescaling to be used either way, since it would significantly affect the quality of the reconstruction. For example, simple pixel repetition preserves hard edges better but destroys smooth gradients, whereas linear interpolation reproduces gradients better but can blur edges.
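One way to see that trade-off is to measure round-trip error directly. A minimal sketch, assuming Pillow and NumPy and a hypothetical input file `photo.jpg`:

```python
import numpy as np
from PIL import Image

def roundtrip_error(img: Image.Image, scale: float, resample) -> float:
    """Downscale then upscale with the given filter and return the
    mean absolute pixel error against the original."""
    w, h = img.size
    small = img.resize((max(1, round(w * scale)), max(1, round(h * scale))), resample)
    restored = small.resize((w, h), resample)
    a = np.asarray(img, dtype=np.float64)
    b = np.asarray(restored, dtype=np.float64)
    return float(np.mean(np.abs(a - b)))

img = Image.open("photo.jpg").convert("L")  # hypothetical input image
for name, mode in [("nearest", Image.NEAREST), ("bilinear", Image.BILINEAR)]:
    print(f"{name}: {roundtrip_error(img, 0.5, mode):.2f}")
```

On an edge-heavy image nearest-neighbour tends to come out ahead; on smooth gradients bilinear does. The choice of filter changes what "preserved detail" means.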

A simplistic approach might be to compute the two-dimensional power spectrum and choose a scale factor (possibly different vertically and horizontally) that keeps the frequencies containing the "majority" of the content. Basically, this would be equivalent to choosing a low-pass filter that retains "most" of the detail. Whether such an approach can be considered "computationally efficient" may be a moot point...
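A rough sketch of that idea, assuming NumPy and a grayscale image as a 2D float array; `suggested_scale` and the 95% energy threshold are my own stand-ins for "the majority of the content":

```python
import numpy as np

def suggested_scale(img: np.ndarray, keep: float = 0.95) -> tuple[float, float]:
    """Return (vertical, horizontal) scale factors such that the kept
    frequency band contains `keep` of the spectral energy on each axis."""
    spectrum = np.abs(np.fft.fft2(img)) ** 2  # 2D power spectrum

    scales = []
    for axis in (0, 1):
        energy = spectrum.sum(axis=1 - axis)         # energy per frequency bin
        freqs = np.abs(np.fft.fftfreq(energy.size))  # |frequency| in cycles/sample
        order = np.argsort(freqs)                    # low to high frequency
        cum = np.cumsum(energy[order]) / energy.sum()
        cutoff = freqs[order][np.searchsorted(cum, keep)]
        scales.append(min(1.0, cutoff / 0.5))        # fraction of Nyquist needed
    return (scales[0], scales[1])

# A full-band image needs nearly full resolution; a low-passed one much less.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))              # white noise: full-band content
blurred = np.cumsum(np.cumsum(sharp, 0), 1)  # crude integration acts as a low-pass
print(suggested_scale(sharp), suggested_scale(blurred))
```

Downscaling by those factors with a decent resampling filter then discards roughly the same frequencies the equivalent low-pass filter would. Each FFT is O(n log n) in the pixel count, so whether this counts as cheap depends on how often you need to run it.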
