Java: inaccuracy using double

Possible duplicate:
Retain precision with double in Java
Strange floating point behavior in a Java program

I am making a histogram class and I am having a strange problem.

Here is the relevant part of the class; there are more methods, but they are not related to the problem.

    private int[] counters;
    private int numCounters;
    private double min, max, width;

    public Histogram(double botRange, double topRange, int numCounters) {
        counters = new int[numCounters];
        this.numCounters = numCounters;
        min = botRange;
        max = topRange;
        width = (max - min) / (double) numCounters;
    }

    public void plotFrequency() {
        for (int i = 0; i < counters.length; i++) {
            writeLimit(i * width, (i + 1) * width);
            System.out.println(counters[i]);
        }
    }

    private void writeLimit(double start, double end) {
        System.out.print(start + " <= x < " + end + "\t\t");
    }

The problem occurs when I plot the frequencies. I created two instances: new Histogram(0, 1, 10); and new Histogram(0, 10, 10);

This is what they print.

    Frequency
    0.0 <= x < 0.1		989
    0.1 <= x < 0.2		1008
    0.2 <= x < 0.30000000000000004		1007
    0.30000000000000004 <= x < 0.4		1044
    0.4 <= x < 0.5		981
    0.5 <= x < 0.6000000000000001		997
    0.6000000000000001 <= x < 0.7000000000000001		1005
    0.7000000000000001 <= x < 0.8		988
    0.8 <= x < 0.9		1003
    0.9 <= x < 1.0		978

    Frequency
    0.0 <= x < 1.0		990
    1.0 <= x < 2.0		967
    2.0 <= x < 3.0		1076
    3.0 <= x < 4.0		1048
    4.0 <= x < 5.0		971
    5.0 <= x < 6.0		973
    6.0 <= x < 7.0		1002
    7.0 <= x < 8.0		988
    8.0 <= x < 9.0		1003
    9.0 <= x < 10.0		982

So my question is: why am I getting really long decimal limits in the first example, but not the second?
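A minimal reproduction (my own sketch, not the poster's code) shows that the bin limits themselves produce the long decimals, not the counting logic: multiplying the loop index by a width of 0.1 accumulates binary rounding error, while multiplying by 1.0 does not.

```java
public class BinLimits {
    public static void main(String[] args) {
        double widthTenths = 0.1; // width from new Histogram(0, 1, 10)
        double widthOnes = 1.0;   // width from new Histogram(0, 10, 10)
        for (int i = 0; i < 10; i++) {
            // Same expression plotFrequency() uses for the lower limit
            System.out.println(i * widthTenths + "\t" + i * widthOnes);
        }
        System.out.println(3 * widthTenths); // prints 0.30000000000000004
    }
}
```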

4 answers

Some decimal values cannot be represented exactly as doubles; 0.3 is one of them.

All integer values up to a certain magnitude (2^53 for double) have an exact representation, so you do not see any approximation there.

Consider how we write numbers: 123 is represented as (1 * 100) + (2 * 10) + (3 * 1). We use 10 as our base; binary uses 2. So when you look at the fractional part of a number, how could you represent 0.3 by adding individual powers of 2 (1/2, 1/4, 1/8, ...)? You cannot do it exactly. The best you can do lands a tiny bit away from 0.3, off by a few parts in 10^17, which is where the 0.30000000000000004 in the output above comes from.
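You can see the exact binary approximation that Java actually stores for the literal 0.3 by passing it to BigDecimal's double constructor (a small sketch of my own; any JDK will do):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the double's exact binary value,
        // unlike BigDecimal.valueOf(double), which goes through toString().
        System.out.println(new BigDecimal(0.3));
        // prints 0.299999999999999988897769753748434595763683319091796875
    }
}
```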


Floating-point numbers are not exact.

This is because there are infinitely many real numbers but only a finite number of bits to represent them.

Take a look at: What Every Programmer Should Know About Floating-Point Arithmetic.


From the Floating-Point Guide:

Because internally, computers use a format (binary floating point) that cannot accurately represent numbers like 0.1, 0.2, or 0.3 at all.

When the code is compiled or interpreted, your "0.1" is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
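If the goal is simply clean-looking limits, one common remedy (my suggestion, not part of this answer) is to round only at display time, leaving the stored doubles untouched. A hypothetical replacement for the writeLimit() helper:

```java
import java.util.Locale;

public class CleanLimits {
    // Format to a fixed number of decimals instead of relying on
    // Double.toString(). Locale.ROOT keeps '.' as the decimal separator
    // regardless of the default locale.
    private static void writeLimit(double start, double end) {
        System.out.printf(Locale.ROOT, "%.1f <= x < %.1f\t\t", start, end);
    }

    public static void main(String[] args) {
        writeLimit(3 * 0.1, 4 * 0.1); // prints 0.3 <= x < 0.4
    }
}
```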

That is your first example. The second involves only whole numbers, and integers can be represented exactly in binary floating point (up to 2^53 for double).
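A quick check of the "integers are exact" claim (my own sketch): whole numbers are exactly representable up to 2^53, after which gaps between representable values appear.

```java
public class IntegerExactness {
    public static void main(String[] args) {
        double limit = 9007199254740992.0; // 2^53, last contiguous integer
        System.out.println(limit + 1.0 == limit); // true: 2^53 + 1 rounds back
        System.out.println(limit - 1.0 == limit); // false: below 2^53 is exact
    }
}
```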


In the second case, the values happen to print as exact whole numbers. See also the Ruby multiplication question; it is the same problem.

