Please note that this is really a question of understanding fixed-size floating-point numbers. Most languages do exactly (or very nearly) what Python does.
Python's float is an IEEE 754 64-bit binary floating-point number. It is limited to 53 bits of precision, that is, slightly less than 16 decimal digits of accuracy. 19.9999999999999999 contains 18 significant decimal digits, so it cannot be represented exactly as a float; float("19.9999999999999999") produces the closest representable value, which is the same as float("20.0"):
>>> float("19.9999999999999999") == float("20.0") True
If by “many decimals” you mean “many digits after the decimal point”, remember that the same “strange” results happen when there are many decimal digits before the decimal point:
>>> float("199999999999999999") 2e+17
If you need full float precision, don't use str(); use the repr() function instead (this distinction matters in Python 2 — in Python 3, str() and repr() return the same shortest round-trip representation of a float):
>>> x = 1. / 3.
>>> str(x)
'0.333333333333'
>>> str(x).count('3')
12
>>> repr(x)
'0.3333333333333333'
>>> repr(x).count('3')
16
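If you want the stored digits spelled out explicitly rather than relying on repr(), a format specifier asking for 17 significant digits works in both Python 2 and 3 (17 digits are enough to round-trip any IEEE 754 double):

>>> format(1. / 3., '.17g')  # all 17 significant digits of the double
'0.33333333333333331'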
Update: It is interesting how often decimal is prescribed as a remedy for the surprise caused by floating point. This is usually accompanied by simple examples like 0.1 + 0.1 + 0.1 != 0.3.
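That classic example does behave as advertised; a quick demonstration with the standard decimal module:

>>> 0.1 + 0.1 + 0.1 == 0.3
False
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True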
Nobody dwells on the fact that decimal has its share of disadvantages, for example:

>>> from decimal import Decimal
>>> (1.0 / 3.0) * 3.0
1.0
>>> (Decimal('1.0') / Decimal('3.0')) * Decimal('3.0')
Decimal('0.9999999999999999999999999999')
True, float is limited to 53 bits of binary precision. By default, Decimal is limited to 28 decimal digits of precision.
>>> Decimal(2) / Decimal(3)
Decimal('0.6666666666666666666666666667')
You can change the limit, but the precision is still finite. You still need to know the characteristics of the number format in order to use it effectively without “astounding” results, and the extra precision is paid for with slower arithmetic (unless you use the third-party cdecimal module, which became the built-in C implementation of decimal in Python 3.3).
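Changing the limit goes through the thread-local context; a minimal sketch (the precision value 6 here is arbitrary):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6  # significant digits for subsequent arithmetic
>>> Decimal(2) / Decimal(3)
Decimal('0.666667')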