
Why does "decimal.Decimal ('0') <1.0" give False in Python 2.6.5

In Python 2.6.5, the following expression gives False:

>>> import decimal
>>> decimal.Decimal('0') < 1.0
False

Is there a rationale explaining why comparing Decimal with float should behave like this?

1 answer

From the documentation of the decimal module:

Changed in version 2.7: Comparisons between a float instance x and a Decimal instance y now return a result based on the values of x and y. In earlier versions, x < y returned the same (arbitrary) result for any Decimal instance x and any float instance y.

In other words, on 2.6.5 the result of a Decimal/float comparison is arbitrary and does not reflect the numeric values, so the False you see carries no meaning. To get a value-based comparison you can either upgrade to 2.7+ or convert one operand so both sides are Decimal.
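A minimal sketch of that workaround (not part of the original answer): convert the float to Decimal before comparing. Note that Decimal(float) is only supported from 2.7 on, so on 2.6 the conversion goes through str.

>>> import decimal
>>> x = decimal.Decimal('0')
>>> y = 1.0
>>> # Convert the float explicitly so the comparison is done by value.
>>> x < decimal.Decimal(str(y))
True
>>> # On 2.7+ the direct comparison already compares by value:
>>> x < y   # True on 2.7+, arbitrary (here False) on 2.6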

+13
