Given the behavior of `BigDecimal(double)`, I'm not sure this is really such a problem. That said, I do take issue with the wording of the documentation for the `BigDecimal(double)` constructor:
> The results of this constructor can be somewhat **unpredictable**. One might assume that writing `new BigDecimal(0.1)` in Java creates a `BigDecimal` which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625.

(Emphasis added.)
Rather than "unpredictable", I think the wording should be "unexpected", and even then, the behavior is only unexpected to those who don't know about the limitations of representing decimal numbers with floating point values.
As long as one remembers that floating point values cannot represent all decimal values exactly, the value returned by `BigDecimal(0.1)`, which is 0.1000000000000000055511151231257827021181583404541015625, really does make sense.
If the `BigDecimal` object returned by the `BigDecimal(double)` constructor is consistent, then I would argue the result is, in fact, predictable.

My guess as to why the `BigDecimal(double)` constructor hasn't been deprecated is that its behavior can be considered correct, and as long as one knows how floating point representations work, the constructor's behavior isn't too surprising.
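A short sketch of the difference described above: passing the `double` literal `0.1` preserves its exact binary value, while the `String` constructor parses the decimal text directly (the class and output values are from the JDK documentation quoted above).

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // The double literal 0.1 is the nearest binary64 value to 0.1;
        // BigDecimal(double) preserves that value exactly.
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor represents the decimal text exactly.
        System.out.println(new BigDecimal("0.1"));
        // prints 0.1
    }
}
```

So the `double` constructor is predictable, just not what a reader expecting exact decimal 0.1 would want; `new BigDecimal("0.1")` (or `BigDecimal.valueOf(0.1)`) is the usual way to get the short decimal form.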
coobird Jun 29 '09 at 11:54