I have an embedded application in which an image scanner sends out a stream of 16-bit pixels that are later assembled into a grayscale image. Since I need to both store this data locally and forward it to a network interface, I would like to compress the data stream to reduce the required storage space and network bandwidth.
Is there a simple algorithm that I can use to losslessly compress the pixel data?
At first I thought about calculating the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit values, so the difference can be anywhere in the range -65535 .. +65535, which leads to potentially huge codeword lengths. If several really long codewords occur in a row, I will run into buffer overflow problems.
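To make the idea concrete, here is a minimal C sketch of the differencing step I have in mind (the function name and the zero initial predictor are just placeholders, and C is used only to illustrate the arithmetic):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of the differencing step described above.
 * Each output is the signed difference between a pixel and its
 * predecessor; with unsigned 16-bit inputs this difference can fall
 * anywhere in -65535 .. +65535, which is exactly the range that
 * worries me when it comes to Huffman codeword lengths. */
static void delta_encode(const uint16_t *pixels, int32_t *deltas, size_t n)
{
    uint16_t prev = 0;  /* assumed initial predictor */
    for (size_t i = 0; i < n; i++) {
        deltas[i] = (int32_t)pixels[i] - (int32_t)prev;
        prev = pixels[i];
    }
}
```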
Update: my platform is an FPGA.