Why does order affect rounding when adding multiple doubles in C#?

Consider the following C# code:

 double result1 = 1.0 + 1.1 + 1.2;
 double result2 = 1.2 + 1.0 + 1.1;
 if (result1 == result2) { ... }

result1 should always equal result2, right? As it turns out, no: result1 is 3.3, and result2 is 3.3000000000000003. The only difference is the order of the constants.

I know that doubles are implemented in a way that allows rounding problems. I know I can use decimal if I need absolute precision, or that I can use Math.Round() in an if statement. I'm just a nerd who wants to understand what the C# compiler is doing. Can anyone tell me?

Edit:

Thanks to everyone who has so far suggested reading up on floating point arithmetic and/or talked about the inherent imprecision of doubles, but I feel the main point of my question is still unanswered, which is my fault for phrasing it poorly. Let me put it this way:

Breaking the above code down, I would expect the following operations to take place:

 double r1 = 1.1 + 1.2;
 double r2 = 1.0 + r1;
 double r3 = 1.0 + 1.1;
 double r4 = 1.2 + r3;

Suppose that each of the above additions carries a rounding error (call them e1..e4). Thus r1 contains rounding error e1, r2 contains rounding errors e1 + e2, r3 contains e3, and r4 contains e3 + e4.

Now, I don't know exactly how the rounding errors arise, but I would have expected e1 + e2 to equal e3 + e4. Clearly that is not the case, but it somehow feels wrong to me. The other thing is that when I run the above code I don't see any rounding errors occur, which makes me think it is the C# compiler doing something odd rather than the processor.
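Here is a small sketch (not part of my original code) that does the additions step by step at run time. C# evaluates these sums left to right, and keeping the operands in variables stops the compiler from folding anything into constants, so each rounding actually happens on the CPU; the intermediates are printed with the round-trip format:

 using System;

 class OrderDemo
 {
     static void Main()
     {
         // Variables prevent the C# compiler from folding the sums into constants,
         // so each addition (and its rounding) happens at run time on the CPU.
         double a = 1.0, b = 1.1, c = 1.2;

         double s1 = a + b;      // first intermediate of result1 (C# adds left to right)
         double s2 = s1 + c;     // result1
         double t1 = c + a;      // first intermediate of result2
         double t2 = t1 + b;     // result2

         // "R" (round-trip) shows enough digits to tell the doubles apart.
         Console.WriteLine(s1.ToString("R"));
         Console.WriteLine(s2.ToString("R"));   // expected: 3.3
         Console.WriteLine(t1.ToString("R"));
         Console.WriteLine(t2.ToString("R"));   // expected: 3.3000000000000003
         Console.WriteLine(s2 == t2);           // expected: False
     }
 }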

I know I'm asking a lot, and maybe the best answer anyone can give is "go do a PhD in processor design", but I thought I'd ask anyway.

Edit 2

Looking at the IL from my code sample, it's clear that it is the compiler, not the processor, doing this:

 .method private hidebysig static void Main(string[] args) cil managed
 {
     .entrypoint
     .maxstack 1
     .locals init (
         [0] float64 result1,
         [1] float64 result2)
     L_0000: nop
     L_0001: ldc.r8 3.3
     L_000a: stloc.0
     L_000b: ldc.r8 3.3000000000000003
     L_0014: stloc.1
     L_0015: ret
 }

The compiler adds numbers for me!
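For what it's worth, the two folded constants can also be compared at the bit level (a sketch of mine, not from the original post); on an IEEE-754 system they should differ only in the least significant bit of the mantissa, i.e. by one ulp:

 using System;

 class BitsDemo
 {
     static void Main()
     {
         double result1 = 1.0 + 1.1 + 1.2;   // folded by the compiler to 3.3
         double result2 = 1.2 + 1.0 + 1.1;   // folded to 3.3000000000000003

         // Raw IEEE-754 bit patterns of the two constants; they should differ
         // only in the last mantissa bit.
         Console.WriteLine(BitConverter.DoubleToInt64Bits(result1).ToString("X16"));
         Console.WriteLine(BitConverter.DoubleToInt64Bits(result2).ToString("X16"));
     }
 }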

+7
compiler-construction c# precision rounding
7 answers

I would expect e1 + e2 to be equal to e3 + e4.

That is no more to be expected than

  floor( 5/3 ) + floor( 2/3 + 1 ) 

being equal to

  floor( 5/3 + 2/3 ) + floor( 1 ) 

except that you multiply by 2^53 before you take the floor. (Working it through: floor(5/3) + floor(2/3 + 1) = 1 + 1 = 2, while floor(5/3 + 2/3) + floor(1) = 2 + 1 = 3.)

Using 12-bit floating point precision and truncation, with your values:

 1.0 = 1.00000000000
 1.1 = 1.00011001100
 1.2 = 1.00110011001

 1.0 + 1.1 = 10.00011001100 // extended during sum
 r1 = 1.0 + 1.1 = 10.0001100110 // truncated to 12 bit
 r1 + 1.2 = 11.01001100101 // extended during sum
 r2 = r1 + 1.2 = 11.0100110010 // truncated to 12 bit

 1.1 + 1.2 = 10.01001100110 // extended during sum
 r3 = 1.1 + 1.2 = 10.0100110011 // truncated to 12 bit
 r3 + 1.0 = 11.01001100110 // extended during sum
 r4 = r3 + 1.0 = 11.0100110011 // truncated to 12 bit

So changing the order of the operations/truncations changes the error, and r4 != r2. If you add 1.1 and 1.2 in this system, the last bit carries over, so it is not lost in the truncation. If you add 1.0 to 1.1, the last bit of 1.1 is lost, and the result comes out different.

In one order, rounding (by truncation) removes a trailing 1.

In the other order, rounding removes a trailing 0 both times.

One is not zero, so the errors do not match.

Doubles have many more bits of precision, and C# (the CPU, really) uses rounding rather than truncation, but hopefully this simple model shows how different orders of the same values can produce different errors.

The difference between floating point and real maths is that + is shorthand for "add then round", not just add.

+10

The C# compiler does nothing; the CPU does.

If you have A in a CPU register and then add B, the result stored in that register is A + B rounded to the floating point precision being used.

If you then add C, the error accumulates. This add-with-error operation is not associative, hence the final difference.
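A minimal sketch (mine, added for illustration) of that non-associativity; the operands are kept in variables so the additions are actually performed by the CPU at run time:

 using System;

 class AssociativityDemo
 {
     static void Main()
     {
         double a = 0.1, b = 0.2, c = 0.3;   // variables, so nothing is folded at compile time

         // Same three values, different grouping, different accumulated error.
         Console.WriteLine(((a + b) + c).ToString("R"));   // typically 0.6000000000000001
         Console.WriteLine((a + (b + c)).ToString("R"));   // typically 0.6
     }
 }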

+6

See the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic. This kind of thing happens with floating point arithmetic. It takes a computer scientist to tell you that 1/3 + 1/3 + 1/3 isn't equal to 1...

+4

The order of floating point operations matters. This doesn't answer your question directly, but you should always be careful when comparing floating point numbers; the comparison usually involves a tolerance:

 double epsilon = 0.0000001;
 if (Math.Abs(result1 - result2) <= epsilon) { ... }
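If the magnitudes of the values can vary a lot, a relative tolerance is often a better fit; a rough sketch (my own variation, not part of the original answer):

 // Compare using a tolerance scaled to the size of the operands.
 static bool NearlyEqual(double x, double y, double relativeEpsilon = 1e-9)
 {
     return Math.Abs(x - y) <= relativeEpsilon * Math.Max(Math.Abs(x), Math.Abs(y));
 }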

This may also be of interest: What Every Computer Scientist Should Know About Floating-Point Arithmetic

+2

result1 should always equal result2 right?

Wrong. This is true in mathematics, but not in floating point arithmetic.

You will need to read up on numerical analysis.

+1

Why the errors differ depending on the order can be explained with another example.

Say you have a number system that can store every whole number up to and including 10 exactly (1, 2, 3, and so on up to 10), but above 10 it loses internal precision and can only store every second number; in other words, it can only store 10, 12, 14, etc.

With this system, you can see why the following sums give different results:

 1 + 1 + 1 + 10 = 12 (or 14, depending on rounding)
 10 + 1 + 1 + 1 = 10
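To make the toy system concrete, here is a small sketch (mine, not part of the original answer) in which every value above 10 is stored by truncating it down to the nearest even number:

 using System;

 class ToyNumberSystem
 {
     // Values up to 10 are stored exactly; above 10 only every second number
     // exists, so anything else is truncated down to the nearest even number.
     static double Store(double x) => x <= 10 ? x : 2 * Math.Floor(x / 2);

     static void Main()
     {
         double a = Store(Store(Store(1 + 1) + 1) + 10);   // 1 + 1 + 1 + 10
         double b = Store(Store(Store(10 + 1) + 1) + 1);   // 10 + 1 + 1 + 1

         Console.WriteLine(a);   // 12
         Console.WriteLine(b);   // 10
     }
 }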

The problem with floating point numbers is that not every value can be represented exactly, and the error does not always go in the same direction, so the order matters.

For example, 3.00000000003 + 3.00000000003 may end up as 6.00000000005 (notice it does not end in 6), while 3.00000000003 + 2.99999999997 may end up as 6.00000000001. With that:

 step 1: 3.00000000003 + 3.00000000003 = 6.00000000005
 step 2: 6.00000000005 + 2.99999999997 = 9.00000000002

but reorder:

 step 1: 3.00000000003 + 2.99999999997 = 6.00000000001
 step 2: 6.00000000001 + 3.00000000003 = 9.00000000004

So the order makes a difference.

Now, of course, you might get lucky and have the examples above balance each other out, with one rounding up to ...1 and the other down to ...1, giving you ...3 in both cases, but there is no guarantee.

+1

In fact, you are not adding the same values, because the intermediate results are different:

 double result1 = 2.1 + 1.2;
 double result2 = 2.2 + 1.1;

Since doubles cannot represent these decimal values exactly, you get different results.

0
