Actually, all this makes sense.
Since 0.8 cannot be represented exactly by any finite sum of terms 1 / 2 ** x, it has to be stored approximately, and the nearest representable value happens to be slightly less than 10015.8.
So when you simply print it, the output is rounded to a sensible number of digits.
When you convert it to an integer without first adding 0.5, the .79999999... fractional part is truncated (discarded) rather than rounded up.
When you type 10001580.0 directly, that value is exactly representable in every format, including float and double, so it never comes out slightly smaller and gets truncated down to the next lower whole number.
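A short sketch of the behavior described above (the multiply-by-100 scenario is my assumption about the original question; the values shown follow from IEEE 754 double rounding):

```javascript
// The nearest double to 10015.8 is slightly below it, so scaling and
// truncating can land one below the value you might expect.
console.log((10015.8).toFixed(12));            // 10015.799999999999
console.log(10015.8 * 100);                    // slightly below 1001580
console.log(Math.trunc(10015.8 * 100));        // 1001579: truncation drops the tail
console.log(Math.round(10015.8 * 100));        // 1001580: rounding recovers it
console.log(Math.trunc(10015.8 * 100 + 0.5));  // 1001580: the add-0.5 trick
```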
Floating point is not inexact; it simply has limits on what it can represent. FP is perfectly exact, but it cannot hold every number we can easily write in base 10. (Update/clarification: well, ironically, it can represent every integer exactly, up to the width of the significand, since every integer is a finite sum of terms 2 ** x, but "every fraction" is another story. Only certain decimal fractions can be built exactly from the 1 / 2 ** x series.)
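This split between exact integers and mostly-inexact decimal fractions is easy to demonstrate in plain JavaScript (nothing here is assumed beyond standard number behavior):

```javascript
// 0.5, 0.25, 0.75 are finite sums of 1 / 2 ** x terms, so they are exact
console.log(0.5 + 0.25 === 0.75);   // true
// 0.1, 0.7, 0.8 all have infinite repeating binary expansions, so none is exact
console.log(0.1 + 0.7 === 0.8);     // false
console.log(0.1 + 0.7);             // 0.7999999999999999
// small integers are always exact, since each is a finite sum of 2 ** x terms
console.log(10015 + 1 === 10016);   // true
```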
In fact, JavaScript implementations use floating-point storage and arithmetic for all numeric values. Because FP hardware gives exact results for integers, JS got 52-bit integer math (53 bits counting the implicit leading significand bit) using hardware that already existed, on what were at the time almost entirely 32-bit machines.
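That integer-exactness limit is directly observable in JavaScript (the 53 figure counts the implicit leading significand bit):

```javascript
// every integer with magnitude below 2 ** 53 is exactly representable in a double
console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991, i.e. 2 ** 53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);           // true: adjacent integers now collide
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false
```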
Digitaloss