Lossless image compression

I have an embedded application in which an image scanner sends out a stream of 16-bit pixels that are later assembled into a grayscale image. Since I need to both store this data locally and forward it to a network interface, I would like to compress the data stream to reduce the required storage space and network bandwidth.

Is there a simple algorithm that I can use to losslessly compress the pixel data?

At first I thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit values, so the difference can be anywhere in the range -65535 .. +65535, which leads to potentially huge codeword lengths. If several really long codewords occur in a row, I will run into buffer overflow problems.
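For concreteness, the delta step itself is cheap to compute. Here is a minimal C sketch; the zigzag fold that maps signed differences to unsigned symbols is an added, commonly used preparation step for an entropy coder, not part of the question's original idea:

    /* Delta-encode 16-bit pixels; fold signed differences into
     * unsigned symbols so small magnitudes of either sign get
     * small symbol values ("zigzag" mapping, an assumption here). */
    #include <stdint.h>
    #include <stddef.h>

    /* 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ... */
    static uint32_t zigzag(int32_t n) {
        return ((uint32_t)n << 1) ^ (uint32_t)(n >> 31);
    }

    void delta_encode(const uint16_t *pixels, uint32_t *out, size_t n) {
        uint16_t prev = 0;
        for (size_t i = 0; i < n; i++) {
            int32_t diff = (int32_t)pixels[i] - (int32_t)prev; /* -65535..+65535 */
            out[i] = zigzag(diff);   /* small |diff| -> small symbol */
            prev = pixels[i];
        }
    }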

Update: my platform is an FPGA.

+5
6 answers

Use the PNG format. PNG does its lossless compression with zlib, and libpng is the free reference implementation. It is simple, portable, and well supported.
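If PNG is an option, the write side is only a few calls. A hedged sketch using the libpng API (the function write_gray16_png and the abbreviated error handling are my own simplification; on the FPGA itself you would more likely implement just the zlib/DEFLATE part):

    #include <stdio.h>
    #include <stdint.h>
    #include <setjmp.h>
    #include <png.h>

    int write_gray16_png(const char *path, const uint16_t *pixels,
                         int width, int height)
    {
        FILE *fp = fopen(path, "wb");
        if (!fp) return -1;

        png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                                  NULL, NULL, NULL);
        png_infop info = png_create_info_struct(png);
        if (!png || !info || setjmp(png_jmpbuf(png))) { /* libpng errors land here */
            png_destroy_write_struct(&png, &info);
            fclose(fp);
            return -1;
        }

        png_init_io(png, fp);
        png_set_IHDR(png, info, width, height, 16, PNG_COLOR_TYPE_GRAY,
                     PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                     PNG_FILTER_TYPE_DEFAULT);
        png_write_info(png, info);
        png_set_swap(png);  /* PNG stores 16-bit samples big-endian; swap on LE hosts */

        for (int y = 0; y < height; y++)
            png_write_row(png, (png_const_bytep)(pixels + (size_t)y * width));

        png_write_end(png, NULL);
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return 0;
    }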

+8

What about zlib or gzip? They are based on LZ77- and LZ78-style dictionary compression.
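A minimal sketch of the one-shot zlib API, assuming whole frames are available in memory (for a continuous stream the deflate() streaming interface fits better; compress_pixels is an illustrative name, not a zlib function):

    #include <stdint.h>
    #include <stdlib.h>
    #include <zlib.h>

    /* Returns a malloc'd buffer with the compressed data, or NULL. */
    unsigned char *compress_pixels(const uint16_t *pixels, size_t count,
                                   uLongf *out_len)
    {
        uLong  src_len = (uLong)(count * sizeof(uint16_t));
        uLongf dst_len = compressBound(src_len);     /* worst-case output size */
        unsigned char *dst = malloc(dst_len);
        if (!dst) return NULL;

        if (compress2(dst, &dst_len, (const Bytef *)pixels, src_len,
                      Z_BEST_SPEED) != Z_OK) {       /* favour speed over ratio */
            free(dst);
            return NULL;
        }
        *out_len = dst_len;
        return dst;
    }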

+3

Try the filtering/prediction step used by PNG before compressing. Filtering/prediction decorrelates neighbouring pixels, so the result compresses far better with a general-purpose compressor afterwards (zlib, for example).

+3

Since the scanner delivers the image line by line, exploit the two-dimensional correlation: predict each pixel from its already-transmitted neighbours and encode only the prediction error, which stays small wherever the image is smooth. If X is the pixel being encoded, predict it from the neighbouring pixels O:

..OO...
..OX

On an FPGA you only need to keep the previous line in a small buffer B to have the upper neighbours available:

OO...   <- previous line, kept in buffer B
X       <- current pixel

Then predict X from the Os, for example as their average.
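A hedged C sketch of this scheme, assuming a simple average of the left and upper neighbours as the predictor (the predictor choice is an assumption; PNG's Paeth and JPEG-LS's MED predictor are common refinements):

    #include <stdint.h>
    #include <stddef.h>

    /* prev_line must be zero-initialized before the first row.
     * err receives the prediction errors, to be entropy-coded. */
    void predict_line(const uint16_t *cur, uint16_t *prev_line,
                      int32_t *err, size_t width)
    {
        uint16_t left = 0;
        for (size_t x = 0; x < width; x++) {
            uint16_t up   = prev_line[x];                       /* O above X */
            uint16_t pred = (uint16_t)(((uint32_t)left + up) / 2);
            err[x] = (int32_t)cur[x] - (int32_t)pred;  /* small on smooth images */
            left = cur[x];
            prev_line[x] = cur[x];              /* becomes "up" for the next line */
        }
    }

The single-line buffer is the whole memory cost, which is why this style of predictor maps so well onto an FPGA.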

+2

" "? , /, +/- 64K, , 8 .

, .

, , , "N'", .
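As an illustration of that escape scheme, a hedged sketch with an invented byte layout: one byte for small deltas, a fixed three-byte escape for everything else, so the worst case is bounded at 3 bytes per pixel:

    #include <stdint.h>
    #include <stddef.h>

    #define ESCAPE 0xFF  /* reserved byte: "a raw 16-bit pixel follows" */

    /* out must hold at least 3*n bytes; returns bytes written. */
    size_t encode(const uint16_t *px, size_t n, uint8_t *out)
    {
        size_t o = 0;
        uint16_t prev = 0;
        for (size_t i = 0; i < n; i++) {
            int32_t d = (int32_t)px[i] - (int32_t)prev;
            if (d >= -126 && d <= 126) {
                out[o++] = (uint8_t)(d + 127);   /* 1..253, never ESCAPE */
            } else {
                out[o++] = ESCAPE;               /* rare large jump */
                out[o++] = (uint8_t)(px[i] >> 8);
                out[o++] = (uint8_t)(px[i] & 0xFF);
            }
            prev = px[i];
        }
        return o;
    }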

+1

A good LZ77/RLE hybrid with bells and whistles can get great compression that decompresses fairly quickly. Such compressors also do better on small files, because they carry almost no header or library overhead. For a good, but GPL'd, implementation of this, check out PUCrunch.
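To illustrate just the RLE half of such a hybrid, a toy sketch (the marker byte and run threshold are arbitrary choices for illustration, nothing from PUCrunch itself):

    #include <stdint.h>
    #include <stddef.h>

    #define RLE_MARK 0x00  /* escape byte chosen for illustration */

    /* Runs collapse to (RLE_MARK, byte, length); out needs up to 3*n bytes. */
    size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0, i = 0;
        while (i < n) {
            size_t run = 1;
            while (i + run < n && in[i + run] == in[i] && run < 255) run++;
            if (run >= 4 || in[i] == RLE_MARK) {     /* worth encoding as a run */
                out[o++] = RLE_MARK;
                out[o++] = in[i];
                out[o++] = (uint8_t)run;
            } else {                                 /* short runs stay literal */
                for (size_t k = 0; k < run; k++) out[o++] = in[i];
            }
            i += run;
        }
        return o;
    }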

+1