I'm currently reading Code Complete by Steve McConnell, specifically page 295 on floating-point numbers.
When I ran the following code:
double nominal = 1.0;
double sum = 0.0;

for (int i = 0; i < 10; i++)
{
    sum += 0.1;
    Console.WriteLine("sum: " + sum.ToString());
}

if (Equals(nominal, sum))
{
    Console.WriteLine("Numbers are the same");
}
else
{
    Console.WriteLine("Numbers are different");
}
I got this printout: 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0, followed by "Numbers are different".
Why didn't I get the output the book says is supposed to happen, i.e. 0.1 0.2 0.30000000000000004 0.4 0.5 0.6 0.7 0.79999999999999999 0.89999999999999999 0.99999999999999999, followed by "The numbers differ from each other"?
Is C# rounding the numbers when it implicitly converts the double to a string? I think so, because when I debug the application and step through the for loop, I can see the long repeating decimals in the debugger. What do you think? Am I right or wrong?
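If that's the case, I'd expect printing with a round-trip format to reveal the hidden digits. This is just a sketch of how I imagine checking it (my assumption: the plain ToString() uses the general format, which rounds to roughly 15 significant digits on .NET Framework, while "R" or "G17" prints enough digits to reproduce the stored value):

using System;

class FloatCheck
{
    static void Main()
    {
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1;
            // Default ToString() (left) vs. round-trip format "R" (right).
            // If my guess is right, the right-hand column should show values
            // like 0.30000000000000004 while the left column stays rounded.
            Console.WriteLine(sum.ToString() + "    " + sum.ToString("R"));
        }
        // The direct comparison still fails, matching the "Numbers are different" branch.
        Console.WriteLine(sum == 1.0 ? "equal to 1.0" : "not equal to 1.0");
    }
}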
floating-point c#
burnt1ce