Python: reading a 12-bit binary image

I have a 12-bit packed image from a GigE camera. The file is little-endian, and every 3 bytes hold two 12-bit pixels. I am trying to read this image with Python, and I tried something like this:

 import bitstring
 import numpy

 with open('12bitpacked1.bin', 'rb') as f:
     data = f.read()

 ii = numpy.zeros(2*len(data)/3)
 ic = 0
 for oo in range(0, len(data)/3):
     aa = bitstring.Bits(bytes=data[oo:oo+3], length=24)
     ii[ic], ii[ic+1] = aa.unpack('uint:12,uint:12')
     ic = ic + 2
 b = numpy.reshape(ii, (484, 644))

In short: I read 3 bytes, convert them to a bit string, and unpack them as two 12-bit integers.
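For a concrete example of the intended unpacking (a minimal sketch; the byte values are made up), one 3-byte group splits like this under the big-endian interpretation that bitstring applies here:

 import bitstring

 # One packed group: 0xAB 0xCD 0xEF should split into 0xABC and 0xDEF
 group = b'\xAB\xCD\xEF'
 p0, p1 = bitstring.Bits(bytes=group, length=24).unpack('uint:12,uint:12')
 print(hex(p0), hex(p1))  # prints 0xabc 0xdef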

The result, however, is very different from what it should be. The image looks as if it were split into four quadrants, each stretched to the full image size and overlaid on the others.

What am I doing wrong here?

Update: Here are the test files:

12-bit packed

12-bit normal

The files are not byte-for-byte identical, but they should show the same image. The 12-bit normal file stores each 12-bit pixel as a uint16:

 with open('12bit1.bin', 'rb') as f:
     a = numpy.fromfile(f, dtype=numpy.uint16)
 b = numpy.reshape(a, (484, 644))
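To check visually that the two decodes agree, a short comparison sketch could look like this (b_packed and b_normal are hypothetical names for the two arrays produced above):

 import matplotlib.pyplot as plt

 fig, axes = plt.subplots(1, 2, figsize=(10, 4))
 axes[0].imshow(b_packed, cmap='gray')   # from the 12-bit packed file
 axes[0].set_title('unpacked 12-bit')
 axes[1].imshow(b_normal, cmap='gray')   # from the uint16 file
 axes[1].set_title('normal uint16')
 plt.show()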
1 answer

Using this code

 for oo in range(0, len(data)/3):
     aa = bitstring.Bits(bytes=data[oo:oo+3], length=24)

you read the overlapping byte triples data[0:3], data[1:4], and so on, because the slice start advances by one byte rather than by three. What you really want is probably this:

 for oo in range(0, len(data)/3):
     aa = bitstring.Bits(bytes=data[3*oo:3*oo+3], length=24)

[EDIT] Even more compact would be the following:

 for oo in range(0, len(data)-2, 3):
     aa = bitstring.Bits(bytes=data[oo:oo+3], length=24)
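As a design note, a per-pixel bitstring loop is slow for full frames; the same unpacking can be vectorized with numpy alone. The sketch below assumes the same big-endian bit layout that the bitstring code above decodes (the first pixel occupies the high 12 bits of each 3-byte group); if the camera packs the nibbles differently, the shifts must be adjusted.

 import numpy as np

 with open('12bitpacked1.bin', 'rb') as f:
     raw = np.fromfile(f, dtype=np.uint8)

 groups = raw.reshape(-1, 3).astype(np.uint16)        # one row per 3-byte group
 pix0 = (groups[:, 0] << 4) | (groups[:, 1] >> 4)     # high 12 bits of the group
 pix1 = ((groups[:, 1] & 0x0F) << 8) | groups[:, 2]   # low 12 bits of the group
 img = np.empty(2 * len(groups), dtype=np.uint16)
 img[0::2] = pix0
 img[1::2] = pix1
 img = img.reshape(484, 644)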
