How to speed up libjpeg decompression

We use libjpeg to decode JPEGs on a small embedded platform, and we have speed problems with large images. For example, an image that is 20 MB and 5000x3000 pixels takes 10 seconds to decode.

I need some tips on how to improve the decoding speed. On another platform with similar performance, the same image decodes in two seconds.

The best improvement so far, from 14 seconds down to 10, came from using a larger read buffer (64 KB instead of the default 4 KB). Nothing else has helped.

We do not need to display the image at full resolution, so we already use scale_num and scale_denom to decode a smaller version. But I would like more speed. Is multithreading an option? Different decoder settings? Anything, really; I have run out of ideas.

+4
3 answers

First, profile the code. You have little more than guesswork if you cannot definitively identify the bottlenecks.

You are already using libjpeg's scale_num and scale_denom. Have you tried dct_method? Setting it to JDCT_FASTEST trades some output quality for speed. Other decompression parameters worth experimenting with are do_fancy_upsampling, do_block_smoothing, dither_mode and two_pass_quantize; the libjpeg documentation describes what each of them costs.

Second, look at your I/O path. Where is the file coming from, and how is it read? If you have enough RAM, read the whole file into memory first and decode from there. Is the image arriving over USB (from an SD card or similar)? That can be slow on embedded boards, and slower still if the storage sits behind a slow bus (SPI etc.).

If the image is stored in on-board flash (e.g. NAND), check how fast that flash can actually be read. Is the NAND driver efficient, and are its reads buffered? Raw read throughput, rather than the decode itself, may be what limits you.

Finally, if none of that helps, there is a related Stack Overflow question worth reading: jpeglib-turbo implementation / < 100?

+4

As said above, profile first; guessing at the bottleneck rarely works.

Decode speed also depends heavily on the CPU itself: check whether your libjpeg build actually uses the platform's SIMD instructions, and whether the chip has a hardware FPU at all; software floating point makes the DCT very slow.

Even so, 10 seconds is a long time. Measure how much of it is spent reading the file and how much is spent decoding, since those call for very different fixes.

+2

Take a look at libjpeg-turbo . If your hardware is supported, it is typically 2-4 times faster than plain libjpeg on the same CPU. A typical 12 MB JPEG decodes in under 2 seconds on a Pandaboard. You can also look at a speed comparison of various JPEG decoders here .

+2
