It can be argued that most lossy image compression schemes, from DCT-based JPEG to fractal compression, essentially do this in their own way.
Note that such methods almost always operate on small blocks of the image rather than on the image as a whole, so that compression can be tuned locally instead of applying the same settings everywhere. Applying one setting globally would likely give poor compression and/or heavy loss on most "real" images, which usually contain a mixture of detail levels, although there are exceptions, such as your out-of-focus example.
You will need to define what counts as "most of the detail of the original image," since perfect reconstruction is only possible for rather contrived images. You will also need to specify exactly which upscaling method will be used, since it has a significant impact on reconstruction quality. For example, simple pixel repetition (nearest-neighbor) preserves hard edges but destroys smooth gradients, while linear interpolation reproduces gradients better but can blur edges.
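As a minimal sketch of that trade-off, the snippet below runs a downscale/upscale round trip with both resampling modes and compares the loss with PSNR. The file name "example.png", the scale factor of 2, and the use of PSNR as the quality metric are my own illustrative choices, not part of the question.

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two uint8 images (higher is better)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def round_trip(img, scale, resample):
    """Downscale by `scale`, upscale back to the original size, measure the loss."""
    w, h = img.size
    small = img.resize((max(1, w // scale), max(1, h // scale)), resample)
    restored = small.resize((w, h), resample)
    return psnr(np.asarray(img), np.asarray(restored))

img = Image.open("example.png").convert("L")  # placeholder input image
for name, mode in [("nearest", Image.NEAREST), ("bilinear", Image.BILINEAR)]:
    print(name, round_trip(img, scale=2, resample=mode))
```

On images dominated by smooth gradients, bilinear tends to score higher; on images with hard edges, the gap narrows or reverses, which is exactly why the choice of upscaler must be fixed before deciding how far you can downscale.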
A simple approach would be to compute the two-dimensional energy spectrum and choose a scale factor (possibly different vertically and horizontally) that preserves the frequencies containing the "majority" of the content. Essentially, this is equivalent to choosing a low-pass filter that retains "most" of the detail. Whether such an approach counts as "computationally efficient" is debatable...
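A rough sketch of that spectrum-based heuristic follows. The 95% energy threshold, the per-axis band shape, and the function name `keep_fraction` are illustrative assumptions of mine, not a prescribed algorithm.

```python
import numpy as np
from PIL import Image

def keep_fraction(img_gray, energy_target=0.95):
    """Estimate, per axis, what fraction of the frequency band must be kept so
    that the retained low frequencies hold `energy_target` of the total
    spectral energy (DC term excluded)."""
    f = np.fft.fftshift(np.fft.fft2(img_gray.astype(np.float64)))
    power = np.abs(f) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    power[cy, cx] = 0.0          # ignore the DC component
    total = power.sum()

    def axis_fraction(axis, k):
        # Energy inside a centered band of half-width k along one axis.
        if axis == 0:            # vertical frequencies
            band = power[max(0, cy - k):cy + k + 1, :]
        else:                    # horizontal frequencies
            band = power[:, max(0, cx - k):cx + k + 1]
        return band.sum() / total

    fractions = []
    for axis, half in ((0, cy), (1, cx)):
        k = next(k for k in range(1, half + 1)
                 if axis_fraction(axis, k) >= energy_target)
        fractions.append(k / half)   # fraction of the Nyquist band to keep
    return tuple(fractions)          # (vertical, horizontal)

img = np.asarray(Image.open("example.png").convert("L"))
print(keep_fraction(img))  # e.g. (0.3, 0.5) suggests safe per-axis downscale factors
```

The returned fractions can be read directly as candidate downscale ratios per axis; the caveat from the answer stands, since two FFT passes plus the band search is not obviously cheaper than just compressing the image conventionally.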