How does C# evaluate a floating-point expression on hover and in the Immediate Window compared to the compiled code?

I am seeing something strange when storing doubles in a dictionary, and I do not understand why.

Here is the code:

  Dictionary<string, double> a = new Dictionary<string, double>();
  a.Add("a", 1e-3);
  if (1.0 < a["a"] * 1e3) Console.WriteLine("Wrong");
  if (1.0 < 1e-3 * 1e3) Console.WriteLine("Wrong");

The second if statement works as expected: 1.0 is not less than 1.0. The first if statement, however, evaluates to true. The very strange part is that when I hover over the if, IntelliSense tells me false, yet the code still steps into the Console.WriteLine.

This is C# 3.5 in Visual Studio 2008.

Is this a floating-point precision issue? If so, why does the second if statement work? I feel like I am missing something very fundamental here.

Any insight is appreciated.

Edit 2 (slightly changed the question):

I can accept the mathematical accuracy issue, but my question now is: why does the hover evaluation come out correct? The same holds for the Immediate Window: if I paste the code from the first if statement into the Immediate Window, it evaluates to false.

Update

First of all, many thanks for all the great answers.

I am also having trouble recreating this in another project on the same machine. Looking at the project settings, I see no differences. Comparing the IL between the projects, I see no differences. Looking at the disassembly, I do not see any visible differences (except for memory addresses). However, when I debug the original project, I see: screenshot of problem

The debugger reports that the if expression is false, but the code still falls into the conditional.

In any case, I think the best answer is to guard against floating-point arithmetic issues in situations like this. The reason I could not let this go was that the debugger's calculations differed from the runtime's. So many thanks to Brian Gideon and Stephentiron for some very insightful comments.
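
For reference, a minimal sketch of the kind of tolerance-based comparison I mean (the Epsilon value here is a hypothetical choice for this example; pick one that suits the scale of your values):

  using System;
  using System.Collections.Generic;

  class ToleranceDemo
  {
      // Hypothetical tolerance for this example; choose one appropriate
      // to the magnitude of the values being compared.
      const double Epsilon = 1e-9;

      static void Main()
      {
          Dictionary<string, double> a = new Dictionary<string, double>();
          a.Add("a", 1e-3);

          double product = a["a"] * 1e3;

          // A strict "1.0 < product" can be flipped by extended-precision
          // noise in the last bits; allowing a small tolerance avoids that.
          if (product - 1.0 > Epsilon)
              Console.WriteLine("Wrong");
          else
              Console.WriteLine("Within tolerance");
      }
  }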

+7
math floating-point c#
4 answers

This is a floating-point precision issue.

The second statement works because the compiler evaluates the constant expression 1e-3 * 1e3 at compile time, before emitting the .exe.

If you look at it in ILDasm/Reflector, you will see it produces something like

  if (1.0 < 1.0) Console.WriteLine("Wrong"); 
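
To see the difference for yourself, here is a minimal sketch contrasting a compile-time constant with a runtime value (the names are just illustrative):

  using System;

  class ConstantFoldingDemo
  {
      // A const operand makes the whole expression a compile-time constant,
      // so the compiler folds Folded * 1e3 (and the comparison) itself.
      const double Folded = 1e-3;

      static void Main()
      {
          double runtime = 1e-3;   // a runtime value: the multiplication is emitted as IL

          Console.WriteLine(1.0 < Folded * 1e3);   // folded at compile time -> False
          Console.WriteLine(1.0 < runtime * 1e3);  // evaluated at runtime; on x87 the
                                                   // product may be held at extended
                                                   // precision and come out just above 1.0
      }
  }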
+13

The problem here is pretty subtle. The C# compiler does not (always) emit code that computes in double, even when that is the type specified. In particular, it emits code that computes in "extended" precision using x87 instructions, without rounding intermediate results to double.

Depending on whether 1e-3 is evaluated as a double or a long double, and whether the multiplication is computed in double or long double, you can get any of the following three results:

  • (long double) 1e-3 * 1e3 computed in long double is 1.0 - epsilon
  • (double) 1e-3 * 1e3 computed in double is 1.0
  • (double) 1e-3 * 1e3 computed in long double is 1.0 + epsilon

Clearly the first comparison, the one that does not meet your expectations, is being evaluated as described in the third scenario I listed. 1e-3 is rounded to double, either because you store it in the dictionary and load it again, which forces the rounding, or because C# recognizes 1e-3 as a double-precision literal and treats it that way. The multiplication is evaluated in long double because of the brain-dead floating-point model C# applies to compiler-generated code.

The multiplication in the second comparison is either evaluated using one of the other two methods (you could find out which by trying "1 > 1e-3 * 1e3"), or the compiler rounds the result of the multiplication before comparing it with 1.0 when it evaluates the expression at compile time.

You can probably tell the compiler not to use extended precision behind your back via some build setting; generating SSE2 code may also work.
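
One workaround sketch, assuming the JIT honors the narrowing conversion: an explicit cast of the intermediate result to double asks for it to be rounded back to 64-bit precision before the comparison.

  using System;
  using System.Collections.Generic;

  class CastWorkaround
  {
      static void Main()
      {
          Dictionary<string, double> a = new Dictionary<string, double>();
          a.Add("a", 1e-3);

          // Without the cast, the product may sit in an x87 register at
          // extended precision and compare as slightly greater than 1.0.
          // The explicit (double) cast requests rounding of the intermediate
          // result back to double before the comparison.
          if (1.0 < (double)(a["a"] * 1e3))
              Console.WriteLine("Wrong");
          else
              Console.WriteLine("As expected");
      }
  }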

+4

See the answers here.

+2

Hmm... weird. I cannot reproduce your problem. I am also using C# 3.5 and Visual Studio 2008. I typed in your example exactly as posted, and I do not see the Console.WriteLine statement execute.

Also, the second if statement gets optimized away by the compiler. When I examine both the debug and release builds in ILDASM/Reflector, I see no trace of it. That is consistent with the fact that I get a compiler warning reporting unreachable code.

Finally, I do not see how this could be a floating-point precision issue anyway. Why would the C# compiler statically evaluate the two doubles differently than the CLR does at runtime? If that were the case, one could argue that the C# compiler has a bug.

Edit: Having thought about this a little more, I am convinced this is not a floating-point precision problem. You must have either stumbled upon a bug in the compiler or the debugger, or the code you posted does not exactly reflect your actual code. I am very skeptical of a compiler bug, but a debugger bug seems more plausible. Try rebuilding the project and running it again; maybe the debugging information bundled with the .exe is out of sync or something.
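
One way to take the debugger out of the equation, as a sketch: print what the program actually computes at runtime instead of relying on hover or Immediate Window values, which may be evaluated through a different path than the JIT-compiled code.

  using System;
  using System.Collections.Generic;

  class RuntimeCheck
  {
      static void Main()
      {
          Dictionary<string, double> a = new Dictionary<string, double>();
          a.Add("a", 1e-3);

          double product = a["a"] * 1e3;

          // "R" is the round-trip format, which prints enough digits to
          // distinguish the stored double from exactly 1.0.
          Console.WriteLine(product.ToString("R"));
          Console.WriteLine(1.0 < product);
      }
  }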

+2
