Let me explain it once more. After the integer part has been printed exactly (with no rounding other than truncation toward zero), it is time for the fraction bits.
Start with a string of bytes (say, 100 of them for starters) containing binary zeros. If the first bit to the right of the binary point in the fp value is set, then 0.5 (2^-1, or 1/(2^1)) is part of the fraction; add 5 to the first byte. If the next bit is set, then 0.25 (2^-2, or 1/(2^2)) is part of the fraction; add 5 to the second byte and 2 to the first (and don't forget the carries, they do happen: grade-school math). If the next bit is set, it contributes 0.125, so add 5 to the third byte, 2 to the second and 1 to the first. And so on:
```
         value          accumulated decimal string
start    0              0000000000000000000...
bit 1    0.5            5000000000000000000...
bit 2    0.25           7500000000000000000...
bit 3    0.125          8750000000000000000...
bit 4    0.0625         9375000000000000000...
bit 5    0.03125        9687500000000000000...
bit 6    0.015625       9843750000000000000...
bit 7    0.0078125      9921875000000000000...
bit 8    0.00390625     9960937500000000000...
bit 9    0.001953125    9980468750000000000...
...
```
I worked this out by hand, so I may have slipped somewhere, but implementing it in code is trivial.
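For the curious, here is a minimal C sketch of the idea (my own illustration, not anyone's production code; the name `print_fraction_exact` is made up). It walks the fraction bits of a double and, for each set bit, adds the decimal expansion of 2^-k into a digit accumulator with carries, exactly as in the table above; the expansion of 2^-k itself is obtained by repeatedly halving the digit string in decimal:

```c
#include <stdio.h>
#include <stdint.h>

#define DIGITS 1100  /* 2^-1074 (smallest subnormal double) needs 1074 digits */

/* Print the fractional part of x exactly, assuming 0 <= x < 1. */
static void print_fraction_exact(double x)
{
    uint8_t acc[DIGITS] = {0};  /* accumulated decimal digits after the point */
    uint8_t pow[DIGITS] = {0};  /* decimal digits of 2^-k; starts at 0.5 */
    int len = 1;
    pow[0] = 5;

    while (x > 0.0) {
        x *= 2.0;               /* shift the next fraction bit into the ones place */
        if (x >= 1.0) {         /* bit k is set: add 2^-k into the accumulator */
            x -= 1.0;
            int carry = 0;
            for (int i = len - 1; i >= 0; i--) {
                int d = acc[i] + pow[i] + carry;
                acc[i] = d % 10;
                carry  = d / 10;
            }
            /* a carry out of the top digit is impossible: the sum stays < 1 */
        }
        /* halve the digit string in decimal: 2^-k -> 2^-(k+1) */
        int rem = 0;
        for (int i = 0; i <= len; i++) {
            int d = rem * 10 + pow[i];
            pow[i] = d / 2;
            rem    = d % 2;
        }
        if (pow[len] != 0)      /* each halving appends one more digit (always a 5) */
            len++;
    }

    /* drop trailing zeros left over from the final halving */
    while (len > 1 && acc[len - 1] == 0)
        len--;

    printf("0.");
    for (int i = 0; i < len; i++)
        putchar('0' + acc[i]);
    putchar('\n');
}

int main(void)
{
    print_fraction_exact(0.625); /* 0.101 in binary: prints 0.625 */
    print_fraction_exact(0.1);   /* prints the exact value actually stored */
    return 0;
}
```

Running it makes the point nicely: the double nearest 0.1 prints as 0.1000000000000000055511151231257827021181583404541015625, which is exactly what is stored.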
So, to all those on SO who complain that they "cannot get an exact result using float": the above is proof that the fractional values stored in a floating-point number are perfectly exact. Painfully exact. But binary.
For those who take the time to understand how this works, exact results are within reach. As for the rest... well, I suspect they will keep posting a new version of the same question every day, never searching for the answers that have been given many times before, honestly believing they have discovered "broken floating point" (or whatever it is called).
"Close to magic", "dark spell" - it's fun!