It is well documented that using a double can lead to inaccuracies, and that BigDecimal guarantees accuracy as long as no doubles are involved.
However, is accuracy also guaranteed if the double in question holds a small integer value?
For example, although the following will be inaccurate / unsafe:
BigDecimal bdDouble = new BigDecimal(0.1d);
will the following always be accurate / safe?
BigDecimal bdDouble = new BigDecimal(1.0d);
Can we assume that a double holding a small integer value converts safely to a BigDecimal - and if so, what is the smallest integer that introduces inaccuracy?
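For illustration, printing the exact values the two constructors produce (just a quick snippet inside a throwaway main, with java.math.BigDecimal imported) shows the contrast:

System.out.println(new BigDecimal(0.1d)); // 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal(1.0d)); // 1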
More information in response to the original answers:
Thanks for the answers. Very helpful.
Just to add a little more detail: I have a legacy interface that delivers doubles, but I can be sure that these doubles represent integers, having been converted from strings via Double.parseDouble(String), where the String is guaranteed to be an integer representation.
I don't want to create a new interface that passes me Strings or BigDecimals if I can avoid it.
I can convert the double to a BigDecimal immediately on my side of the interface and do all the internal calculations with BigDecimal calls, but I want to be sure that this is just as safe as creating a new BigDecimal / String interface.
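For reference, this is roughly the kind of adapter I have in mind on my side of the interface (the class and method names are just placeholders; toBigIntegerExact() throws an ArithmeticException if the double carries any fractional part, so a bad value would at least fail loudly):

import java.math.BigDecimal;
import java.math.BigInteger;

class LegacyAdapter {
    // Convert a double received from the legacy interface into a BigDecimal,
    // rejecting anything that is not an exact integer.
    static BigDecimal fromLegacyDouble(double value) {
        BigDecimal exact = new BigDecimal(value);          // exact value carried by the double
        BigInteger asInteger = exact.toBigIntegerExact();  // throws ArithmeticException if fractional
        return new BigDecimal(asInteger);
    }
}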
Given that in my original example using 0.1d the value 0.1 is not obtained exactly (the actual BigDecimal is 0.1000000000000000055511151231257827021181583404541015625), it seems that some fractions will introduce inaccuracy.
On the other hand, given that my original example using 1.0d yields exactly 1, it seems that integer values come through cleanly. Apparently this is guaranteed up to 2^53, if I understand your answers correctly.
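A quick way to probe that boundary (again just snippet code in a main; if the 2^53 limit is right, the +1 case loses its last unit already in the double, before BigDecimal ever sees it):

long limit = 1L << 53;                                     // 2^53 = 9007199254740992
System.out.println(new BigDecimal((double) limit));        // prints 9007199254740992 - still exact
System.out.println(new BigDecimal((double) (limit + 1)));  // also prints 9007199254740992 - the +1 is lost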
Is this a correct guess?