new BigDecimal(double) versus new BigDecimal(String)

When BigDecimal is constructed from a double and when it is constructed from a String, different results appear.

 BigDecimal a = new BigDecimal(0.333333333);
 BigDecimal b = new BigDecimal(0.666666666);
 BigDecimal c = new BigDecimal("0.333333333");
 BigDecimal d = new BigDecimal("0.666666666");
 BigDecimal x = a.multiply(b);
 BigDecimal y = c.multiply(d);
 System.out.println(x);
 System.out.println(y);

x is output as

 0.222222221777777790569747304508155316795087227497352441864147715340493949298661391367204487323760986328125 

and y is

 0.222222221777777778 

Am I mistaken in saying that this is due to double inaccuracy? But since this is BigDecimal, shouldn't the results be the same?

+4
4 answers

Am I mistaken in saying that this is due to a double inaccuracy?

You are absolutely right: this is precisely because of double inaccuracy.

But since this is BigDecimal, should it not be the same?

No, it should not. The error is introduced at the point where new BigDecimal(0.333333333) is created, because the double literal 0.333333333 already has an error built into it. At that point nothing can be done to correct the value: the proverbial horse is already out of the barn, so it is too late to close the doors.

When you pass a String, on the other hand, the decimal representation matches the string exactly, so you get a different result.
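
As a rough illustration of that point (a small sketch, not part of the original answer), the error can be observed before any multiplication takes place:

 import java.math.BigDecimal;

 public class ConstructorDemo {
     public static void main(String[] args) {
         // The double literal is rounded to the nearest binary value at this point,
         // so the BigDecimal already differs from 0.333333333.
         System.out.println(new BigDecimal(0.333333333));

         // The String constructor represents exactly the digits in the string.
         System.out.println(new BigDecimal("0.333333333"));
     }
 }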

+10

Yes, this is a floating-point error. The problem is that the literals 0.333333333 and 0.666666666 are represented as doubles before being passed to BigDecimal. Specifically, you are using the BigDecimal constructor that takes a double argument.

This is supported by the language specification, which states that floating-point literals are of type double by default, unless otherwise specified.
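
A quick way to see that the literal has already become a double before the constructor runs (a small sketch, not from the original answer):

 import java.math.BigDecimal;

 public class LiteralDemo {
     public static void main(String[] args) {
         double d = 0.333333333; // the literal is converted to the nearest double here

         // Passing the variable and passing the literal reach the same
         // BigDecimal(double) constructor, so the results are identical.
         System.out.println(new BigDecimal(d).equals(new BigDecimal(0.333333333))); // true
     }
 }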

+6

The Java docs have the answer. According to the Javadoc for BigDecimal(double val):

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double.
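
The same Javadoc recommends the String constructor, or the static BigDecimal.valueOf(double) method, when the decimal you wrote down is the value you want. A short sketch of the 0.1 case it describes:

 import java.math.BigDecimal;

 public class PointOneDemo {
     public static void main(String[] args) {
         System.out.println(new BigDecimal(0.1));     // exact value of the double nearest to 0.1
         System.out.println(new BigDecimal("0.1"));   // exactly 0.1
         System.out.println(BigDecimal.valueOf(0.1)); // 0.1, via Double.toString(0.1)
     }
 }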

+4

When you define a double variable, in whatever way, in most cases it will not hold exactly the value you wrote, but the closest binary representation of it. You pass a double to the constructor, so the inaccuracy is already there before BigDecimal sees the value.
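
For example (a small sketch, not part of the original answer), the rounding is visible in plain double arithmetic before BigDecimal is involved at all:

 import java.math.BigDecimal;

 public class RoundingDemo {
     public static void main(String[] args) {
         // Plain double arithmetic already shows the effect of binary rounding.
         System.out.println(0.1 + 0.2); // 0.30000000000000004

         // The BigDecimal(double) constructor simply exposes the exact value it was handed.
         System.out.println(new BigDecimal(0.1 + 0.2));
     }
 }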

0
