No, what you're seeing really does make sense.
Think of a single channel value (red, for example). It dictates the amount of redness in a pixel and, as an 8-bit value, lies somewhere between 0 and 255. So you can picture all the possible redness values spread across that range.
If you simply shifted it left by eight bits (or multiplied by 256) to get a 16-bit color value, you would get values in steps of 256, somewhere between 0 and 255 * 256 (65280) inclusive.
Although this scales the redness relatively well, it does not properly distribute it over the entire 16-bit range.
For example, 255 in the 8-bit range means maximum redness, but simply multiplying by 256 gives 65280, not the maximum redness on the 16-bit scale, which would be 65535.
Multiplying by 256 and then adding the original value (effectively multiplying by 257) distributes it correctly over the range 0..65535.
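As a quick illustration, here is a minimal sketch in C (the name widen8to16 is just for this example, not from any particular library):

```c
#include <stdint.h>

/* Widen an 8-bit channel value to 16 bits.
 * v * 257 is the same as (v << 8) | v, so 0 maps to 0 and 255 maps to 65535. */
static uint16_t widen8to16(uint8_t v)
{
    return (uint16_t)(v * 257);   /* or equivalently: (v << 8) | v */
}
```

Written as a shift-and-OR, you can see that the bit pattern is simply replicated: 0xAB becomes 0xABAB, which makes it clear why both endpoints of the range line up.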
It's the same as scaling the single-digit integers 0..9 into the range 0..99. Multiplying by ten is one way, but the better way is to multiply by ten and then add the original value (or, equivalently, multiply by eleven):
n    n*10    n*10+n
-    ----    ------
0       0         0
1      10        11
2      20        22
3      30        33
4      40        44
5      50        55
6      60        66
7      70        77
8      80        88
9      90        99
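If you want to reproduce that table yourself, a throwaway loop like this (plain C, purely for illustration) prints it:

```c
#include <stdio.h>

int main(void)
{
    /* Print n, n*10 and n*10+n for the single digits 0..9. */
    for (int n = 0; n <= 9; n++)
        printf("%d %4d %6d\n", n, n * 10, n * 10 + n);
    return 0;
}
```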