If you have doubles and you normalize them to between 0.0 and 1.0, there are a number of sources of precision loss. They are, however, all far smaller than you might suspect.
First, you will lose some precision in the arithmetic operations needed to normalize, as the results are rounded. This loss is relatively small - a bit or so per operation - and usually effectively random.
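As a rough illustration (the values and variable names here are made up for the example), you can compare the usual normalization formula computed in double arithmetic against the same computation done in exact rational arithmetic; the accumulated rounding error comes out at the level of an ulp or two:

```python
from fractions import Fraction

lo, hi, x = 3.1, 97.4, 41.7

# Normalize in double arithmetic: each operation rounds its result
# to the nearest representable double.
norm = (x - lo) / (hi - lo)

# The same computation carried out exactly, for comparison.
exact = (Fraction(x) - Fraction(lo)) / (Fraction(hi) - Fraction(lo))

# The accumulated rounding error: tiny, and effectively random in sign.
err = abs(Fraction(norm) - exact)
```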
Second, the exponent component will no longer be able to use positive exponent values.
Third, since all the values are positive, the sign bit is also wasted.
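Both of those points can be seen directly by unpacking the raw bit fields of a double (the field layout is the IEEE 754 binary64 format; the helper name here is our own): for values in (0, 1], the sign bit is always 0 and the biased exponent never exceeds 1023, i.e. the unbiased exponent is never positive.

```python
import struct

def double_fields(x: float):
    """Split a binary64 double into (sign, biased exponent, mantissa)."""
    b = struct.unpack('<Q', struct.pack('<d', x))[0]
    return b >> 63, (b >> 52) & 0x7FF, b & ((1 << 52) - 1)

for v in (2.0**-300, 0.25, 0.999, 1.0):
    sign, biased_exp, mantissa = double_fields(v)
    # Sign bit unused (always 0), exponent capped at the bias (1023),
    # so one sign bit and half the exponent codes go unused.
    assert sign == 0 and biased_exp <= 1023
```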
Fourth, if the input space does not contain +inf, -inf, NaN, or the like, those code points are wasted as well.
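For scale (a back-of-the-envelope count, not from the answer itself): every infinity and NaN encoding shares the all-ones exponent field, so together they occupy only 2^53 of the 2^64 bit patterns, which is why this fourth source barely registers in the total:

```python
import math

# All +/-inf and NaN encodings have exponent field 0x7FF:
# 2 sign values times 2**52 mantissa values.
special = 2 * 2**52

# Fraction of the 64-bit code space they occupy: 2**-11, about 0.05%.
wasted_fraction = special / 2**64

# Information lost by never using them: well under a hundredth of a bit.
wasted_bits = -math.log2(1 - wasted_fraction)
```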
But, for the most part, you will spend about 3 bits of information of a 64-bit double in your normalization, one of which is nearly unavoidable whenever you deal with finite bit widths.
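That estimate can be sanity-checked by counting code points (our arithmetic, counting every exponent field below the bias, subnormals included): the bit patterns that encode a double in [0.0, 1.0] are those with exponent field 0 through 1022, plus the single pattern for 1.0, which works out to roughly 2 of the 64 bits lost to the sign and exponent; the rounding loss from the arithmetic adds roughly another bit, giving the ~3 bits above.

```python
import math

# Patterns encoding a double in [0.0, 1.0]: exponent fields 0..1022,
# each with 2**52 mantissa values (subnormals included), plus 1.0 itself.
in_range = 1023 * 2**52 + 1

# Bits of the 64-bit code space that can never be used once every
# value lies in [0, 1]: about 2 bits.
wasted_bits = 64 - math.log2(in_range)
```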
Any 64-bit fixed-point representation of values from 0 to 1 will have a much smaller "range" than doubles. A double can represent values on the order of 10^-300, while a 64-bit fixed-point representation that includes 1.0 can go no smaller than about 10^-19. (A 64-bit fixed-point representation can represent 1 - 10^-19 as distinct from 1, which a double cannot; but a 64-bit fixed-point value cannot represent anything smaller than 2^-64, which a double can.)
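The contrast in the parenthetical can be checked directly with exact rational arithmetic (a sketch; "fixed point" here means a 64-bit binary fraction with values n/2^64): the fixed-point step 2^-64 is about 5.4 * 10^-20, the value 1 - 2^-64 rounds to exactly 1.0 as a double, and a double has no trouble representing 10^-300.

```python
from fractions import Fraction

# Smallest positive value in a 64-bit binary fixed-point fraction
# (n / 2**64 encoding): 2**-64, about 5.4e-20.
step = Fraction(1, 2**64)

# Distinct from 1 in fixed point, but rounds to exactly 1.0 as a double,
# since the nearest double below 1.0 is 1 - 2**-53.
almost_one = 1 - step

# A double, by contrast, represents values far smaller than 2**-64.
tiny = 1e-300
```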
Some of the numbers above are approximate, and may depend on the rounding mode and the exact floating-point format.
Yakk