Can a double holding a small integer value be used to initialize a BigDecimal precisely?

It is well documented that using a double can lead to inaccuracies, and that BigDecimal guarantees accuracy provided no double is in the mix.

However, is accuracy guaranteed if the double in question holds a small integer value?

For example, although the following will be inaccurate / unsafe:

BigDecimal bdDouble = new BigDecimal(0.1d); // 0.1000000000000000055511151231257827021181583404541015625 

will the following always be accurate / safe?

BigDecimal bdDouble = new BigDecimal(1.0d); // 1

Can we assume that a double holding a small integer value is safe to use with BigDecimal? If so, what is the smallest integer that introduces inaccuracy?

Update: more information in response to the original answers.

Thanks for the answers. Very helpful.

Just to add a little more detail: I have a legacy interface that delivers doubles, but I can be sure that those doubles represent integers, having been converted from Strings via Double.parseDouble(String), where the String is guaranteed to be an integer representation.

I don't want to create a new interface that passes me Strings or BigDecimals if I can avoid it.

I can convert the double to a BigDecimal immediately on my side of the interface and do all the internal calculations with BigDecimal, but I want to be sure that this is as safe as creating a new BigDecimal / String interface.
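Here is a minimal sketch of the path I mean (legacyValue is just a hypothetical stand-in for the legacy interface, not its real name):

import java.math.BigDecimal;

public class LegacyConversionSketch {
    // Hypothetical stand-in for the legacy interface: it parses a
    // guaranteed-integer String into a double before handing it over.
    static double legacyValue(String integerString) {
        return Double.parseDouble(integerString);
    }

    public static void main(String[] args) {
        double fromLegacy = legacyValue("42");

        // Convert to BigDecimal immediately on this side of the interface
        // and do all further arithmetic with BigDecimal.
        BigDecimal exact = new BigDecimal(fromLegacy);

        System.out.println(exact); // prints 42
    }
}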

Given that my original example using 0.1d does not yield 0.1 exactly (the actual BigDecimal is 0.1000000000000000055511151231257827021181583404541015625), it seems that some fractions will introduce inaccuracy.

On the other hand, given that my original example using 1.0d yields exactly 1, it seems that integers come out cleanly. Apparently this is guaranteed up to a value of 2^53, if I understand the answers correctly.

Is this a correct guess?

2 answers

The BigDecimal aspect is not really relevant to this question beyond "what is the range of integers that can be exactly represented in double?": every finite double value can be represented exactly by a BigDecimal, and that is the value you get if you call the BigDecimal(double) constructor. So you are safe as long as the value you want to represent is an integer that is exactly representable in double; if you pass that double to the BigDecimal constructor, you will get a BigDecimal that exactly represents the same integer.

The significand of a double is 52 bits. Due to normalization, this means you should expect to be able to store integer values exactly in the range [-2^53, 2^53]. Those are pretty big numbers.

Of course, if you are only ever representing integers, you have to wonder why you are using double at all... and you need to make sure that whatever conversions you apply to the original source data before it becomes a double do not lose information. But purely on the question of which range of integers is exactly representable as double values, I believe the above is correct...
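As a quick illustration of that boundary (a minimal sketch; 2^53 = 9007199254740992):

import java.math.BigDecimal;

public class ExactIntegerRange {
    public static void main(String[] args) {
        long max = 1L << 53; // 9007199254740992 = 2^53

        // 2^53 itself survives the round trip through double...
        System.out.println(new BigDecimal((double) max));
        // 9007199254740992

        // ...but 2^53 + 1 does not: the nearest double is 2^53 again.
        System.out.println(new BigDecimal((double) (max + 1)));
        // 9007199254740992

        // Any integer of smaller magnitude converts back exactly.
        long n = 123456789L;
        System.out.println(new BigDecimal((double) n).longValueExact() == n); // true
    }
}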


The short answer is no. Due to the way a floating-point variable is stored in memory, there is no "small" value: 0.000001 uses the same number of bits as 100000, and every value is represented in the same way, 0.xxx..eyy.

The best way to initialize a BigDecimal is to initialize it with a String:

BigDecimal bdDouble = new BigDecimal("0.1");
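A short sketch comparing the two constructors makes the difference visible:

import java.math.BigDecimal;

public class StringVsDoubleConstructor {
    public static void main(String[] args) {
        // The String constructor keeps the decimal value exactly as written.
        System.out.println(new BigDecimal("0.1"));
        // 0.1

        // The double constructor captures the nearest binary approximation.
        System.out.println(new BigDecimal(0.1d));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}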

