Loss of double precision in C# when adding, subtracting, and comparing

I have just started learning C#. I plan to use it for heavy mathematical modeling, including numerical solvers. The problem is that I get a loss of accuracy when adding and subtracting double values, as well as when comparing them. The code and what it returns (in the comments) is below:

    using System;

    namespace ex3
    {
        class Program
        {
            static void Main(string[] args)
            {
                double x = 1e-20, foo = 4.0;
                Console.WriteLine(x + foo);          // prints 4
                Console.WriteLine(x - foo);          // prints -4
                Console.WriteLine((x + foo) == foo); // prints True BUT THIS IS FALSE!!!
            }
        }
    }

What puzzles me most is that (x + foo) == foo returns True.

Thank you for your help and clarification!

+5
4 answers

Take a look at the MSDN documentation for double: https://msdn.microsoft.com/en-AU/library/678hzkk9.aspx

It states that a double has an accuracy of 15 to 16 digits.

But 1e-20 and 4.0 differ by about 20 orders of magnitude. The simple act of adding 1e-20 to, or subtracting it from, 4.0 means that 1e-20 is lost, because it cannot fit within those 15 to 16 digits of precision.

So, as far as double is concerned, 4.0 + 1e-20 == 4.0 and 4.0 - 1e-20 == 4.0.
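
To make the gap concrete, here is a minimal sketch (assuming .NET Core 3.0 or later, which added Math.BitIncrement) that prints the distance between 4.0 and the next representable double; anything much smaller than that gap simply vanishes in the addition:

    using System;

    class GapDemo
    {
        static void Main()
        {
            double foo = 4.0;
            // Math.BitIncrement returns the next representable double above foo
            // (available since .NET Core 3.0).
            double next = Math.BitIncrement(foo);
            Console.WriteLine(next - foo);         // ~8.88E-16: the gap between doubles near 4.0
            Console.WriteLine(foo + 1e-20 == foo); // True: 1e-20 is far below half that gap
        }
    }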

+4

In addition to Enigmativity's answer:

For that you need higher precision, and decimal provides it, with a precision of 28 to 29 significant digits and a base of 10:

    decimal x = 1e-20m, foo = 4.0m;
    Console.WriteLine(x + foo);          // prints 4.00000000000000000001
    Console.WriteLine(x - foo);          // prints -3.99999999999999999999
    Console.WriteLine((x + foo) == foo); // prints False

But be careful: while it is true that decimal has greater precision, it has a smaller range. There is more about decimal in the MSDN documentation (https://msdn.microsoft.com/en-us/library/system.decimal.aspx).
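
For a quick illustration of that range trade-off, a small sketch comparing the two types' built-in limits (these constants are part of the standard library):

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine(double.MaxValue);  // ~1.7976931348623157E+308
            Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335, i.e. ~7.9E+28
        }
    }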

+2

What you are looking for is probably the decimal struct (https://msdn.microsoft.com/en-us/library/system.decimal.aspx). Doubles cannot correctly represent such values with the precision you are looking for (see "C# accurate to decimal precision"). Instead, try using decimal, for example:

    decimal x = 1e-20M, foo = 4.0M;
    Console.WriteLine(Decimal.Add(x, foo));        // prints 4.00000000000000000001
    Console.WriteLine(Decimal.Add(x, -foo));       // prints -3.99999999999999999999
    Console.WriteLine(Decimal.Add(x, foo) == foo); // prints False
+1

This is not a problem with C#, but with how your computer represents floating-point numbers. It is not particularly hard to understand, but it is a long read. You should read the classic article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" if you want a deep understanding of how your computer works.

An excellent TL;DR site, which IMHO is a better introduction to the topic than the article above:

http://floating-point-gui.de/


I will give a very short explanation of what is happening here, but you should definitely read at least that site to avoid future trouble, since your field of application will require deep knowledge of this.

What happens is this: you have 1e-20, which is smaller than 1.11e-16. That second number is the machine epsilon for double precision on your computer (most likely). If you add something smaller than the machine epsilon to a number equal to or greater than 1, it gets rounded away, back to the larger number. This is a consequence of the IEEE 754 representation: the "correct" result of the addition (as if you had infinite precision) has to be stored in a limited, finite-precision format, which rounds 4.00....001 back to 4, because the relative rounding error is smaller than 1.11e-16 and is therefore considered acceptable.
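
The practical consequence (and the usual advice from the site above) is to compare doubles with a tolerance rather than with ==. A minimal sketch; the NearlyEqual helper and its default tolerance are illustrative choices, not a standard API:

    using System;

    class NearlyEqualDemo
    {
        // Relative-tolerance comparison; the default tolerance is an arbitrary
        // illustrative value, so pick one that matches your model's expected error.
        static bool NearlyEqual(double a, double b, double relTol = 1e-12)
        {
            return Math.Abs(a - b) <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));
        }

        static void Main()
        {
            Console.WriteLine(0.1 + 0.2 == 0.3);            // False: binary rounding error
            Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3)); // True: equal within tolerance
        }
    }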

0
