Simply put, to represent a number in decimal, we do something like this:
1550 = 1·1000 + 5·100 + 5·10 + 0·1 = 1·10³ + 5·10² + 5·10¹ + 0·10⁰
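For instance, here is a quick sketch in Python (my choice of language, purely for illustration; the helper name is made up) that breaks an integer into exactly those digit-times-power-of-10 terms:

```python
# Split a decimal integer into its nonzero digit · 10^position terms.
def decimal_terms(n):
    digits = str(n)
    return [f"{d}·10^{len(digits) - 1 - i}"
            for i, d in enumerate(digits) if d != "0"]

print(decimal_terms(1550))  # ['1·10^3', '5·10^2', '5·10^1']
```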
To represent a decimal number in binary, you change the base:
106 = 2^6 + 2^5 + 2^3 + 2^1 = 0110 1010
where each 0 bit contributes nothing: 0·2^7, 0·2^4, 0·2^2, 0·2^0.
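You can verify this with a small Python snippet (again, the language and variable names are my own, not from the original answer) that lists the set bits of 106 and sums the corresponding powers of 2:

```python
n = 106
print(format(n, "08b"))                # 01101010
set_bits = [i for i in range(8) if (n >> i) & 1]
print(set_bits)                        # [1, 3, 5, 6]
print(sum(2 ** i for i in set_bits))   # 106
```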
As you can see, the pattern is clear: you keep adding powers of 2 as needed until you reach the number you want, going down to 2^0 = 1 if necessary. With a little work, the same idea extends to fractions: you just keep subtracting 1 from the exponent, moving into negative powers:
0.5 = 2^-1 = 1/2^1
And for other fractions:
0.6875 = 0.1011 = 1/2^1 + 1/2^3 + 1/2^4
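This "take the next negative power" idea can be sketched as code. One common way to get the fractional bits is to repeatedly double the value and peel off the integer part; the function name and the 16-bit cutoff below are my own assumptions:

```python
# Greedily expand a fraction 0 <= x < 1 into binary digits.
def fraction_to_binary(x, max_bits=16):
    bits = []
    while x > 0 and len(bits) < max_bits:
        x *= 2
        bit = int(x)          # 1 if the current 1/2^k term fits, else 0
        bits.append(str(bit))
        x -= bit
    return "0." + "".join(bits)

print(fraction_to_binary(0.6875))  # 0.1011 -- terminates exactly
print(fraction_to_binary(0.1))     # 0.0001100110011001 -- cut off; the pattern repeats forever
```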
However, there are numbers that cannot be written as a finite sum of inverse powers of two. Something as simple as 0.1 requires adding smaller and smaller terms to approximate it, and its binary expansion looks like 0.0001100110011..., which, cut off there, is really 0.0999755859375..., not 0.1. Since the language must convert between bases every time it stores a value or does arithmetic, these small errors begin to appear.
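The answer does not name a specific language, but the effect is easy to see in Python, which can show exactly what value is stored for 0.1:

```python
from decimal import Decimal
from fractions import Fraction

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))     # 3602879701896397/36028797018963968 (the nearest binary fraction)
```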