What are the performance overheads for BigDecimal compared to double?
A lot. For example, multiplying two double values is a single machine instruction. Multiplying two BigDecimal values probably takes a minimum of 50 machine instructions, and has complexity O(N * M), where N and M are the numbers of bytes used to represent the two numbers.
However, if your application requires the calculation to be "decimal correct", you need to accept the overhead.
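As a rough illustration (a sketch, not a rigorous benchmark; a harness like JMH would be needed for trustworthy numbers), here is a loop that contrasts the two:

    import java.math.BigDecimal;

    public class MultiplyOverhead {
        public static void main(String[] args) {
            final int ITERS = 5_000_000;

            // double: one multiply per iteration; the loop-carried dependency
            // stops the JIT from hoisting it out of the loop.
            double x = 1.0000001;
            double dSink = 1.0;
            long t0 = System.nanoTime();
            for (int i = 0; i < ITERS; i++) {
                dSink = dSink * x;
            }
            long doubleNanos = System.nanoTime() - t0;

            // BigDecimal: each multiply allocates a new object and does
            // multi-word integer arithmetic on the unscaled values.
            BigDecimal a = new BigDecimal("1234.5678");
            BigDecimal b = new BigDecimal("8765.4321");
            BigDecimal bSink = BigDecimal.ZERO;
            t0 = System.nanoTime();
            for (int i = 0; i < ITERS; i++) {
                bSink = a.multiply(b);
            }
            long bigDecimalNanos = System.nanoTime() - t0;

            // Print the sinks so the JIT cannot discard the loops as dead code.
            System.out.println(dSink + " " + bSink);
            System.out.printf("double:     %,d ns%n", doubleNanos);
            System.out.printf("BigDecimal: %,d ns%n", bigDecimalNanos);
        }
    }

On a typical JVM you should expect the BigDecimal loop to be dramatically slower, largely because of the per-operation object allocation and multi-word arithmetic.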
However (#2) ... even BigDecimal cannot perform this calculation exactly:
1/3 + 1/3 + 1/3 -> ?
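You can see this with BigDecimal directly: an exact divide by 3 throws, because the quotient has a non-terminating decimal expansion, and as soon as you pick a scale and rounding mode the rounding error shows up in the sum. A minimal sketch:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class OneThird {
        public static void main(String[] args) {
            BigDecimal one = BigDecimal.ONE;
            BigDecimal three = new BigDecimal(3);

            // one.divide(three) throws ArithmeticException: the quotient
            // 0.333... has a non-terminating decimal expansion.

            // So we must round somewhere, e.g. at 50 decimal digits...
            BigDecimal third = one.divide(three, 50, RoundingMode.HALF_UP);

            // ...and the rounding error then surfaces in the sum:
            System.out.println(third.add(third).add(third));
            // prints 0.99999999999999999999999999999999999999999999999999, not 1
        }
    }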
To perform this calculation exactly, you would need to implement a Rational type; i.e. a pair of BigInteger values ... plus some logic to reduce common factors.
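A minimal sketch of such a type (this Rational class and its API are hypothetical, not anything from the standard library): a pair of BigInteger values, reduced to lowest terms via gcd:

    import java.math.BigInteger;

    public final class Rational {
        final BigInteger num;
        final BigInteger den;

        Rational(BigInteger num, BigInteger den) {
            if (den.signum() == 0) throw new ArithmeticException("zero denominator");
            // Normalize the sign, then reduce by the greatest common divisor.
            if (den.signum() < 0) { num = num.negate(); den = den.negate(); }
            BigInteger g = num.gcd(den);
            this.num = num.divide(g);
            this.den = den.divide(g);
        }

        static Rational of(long n, long d) {
            return new Rational(BigInteger.valueOf(n), BigInteger.valueOf(d));
        }

        Rational add(Rational o) {
            // a/b + c/d = (a*d + c*b) / (b*d); the constructor reduces the result.
            return new Rational(num.multiply(o.den).add(o.num.multiply(den)),
                                den.multiply(o.den));
        }

        Rational multiply(Rational o) {
            return new Rational(num.multiply(o.num), den.multiply(o.den));
        }

        @Override public String toString() { return num + "/" + den; }

        public static void main(String[] args) {
            Rational third = Rational.of(1, 3);
            System.out.println(third.add(third).add(third));  // prints 1/1, exactly
        }
    }

A production version would also need equals/hashCode, comparison, subtraction, division, and so on; but even this sketch gets 1/3 + 1/3 + 1/3 == 1 exactly.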
However (#3) ... even a hypothetical Rational type will not give you an exact numerical representation of (say) Pi.
Stephen C