Strange compiler behavior with float literals vs float variables

I noticed an interesting behavior in how the C# compiler rounds/truncates float values. Namely, when a floating-point literal goes beyond the guaranteed representable precision (7 decimal digits), then: a) explicitly casting the floating-point result to float (a semantically unnecessary operation) and b) storing the intermediate result in a local variable both change the output. Example:

    using System;

    class Program
    {
        static void Main()
        {
            float f = 2.0499999f;
            var a = f * 100f;
            var b = (int) (f * 100f);
            var c = (int) (float) (f * 100f);
            var d = (int) a;
            var e = (int) (float) a;
            Console.WriteLine(a);
            Console.WriteLine(b);
            Console.WriteLine(c);
            Console.WriteLine(d);
            Console.WriteLine(e);
        }
    }

Output:

    205
    204
    205
    205
    205

In a debug build, JITted on my machine, b is calculated as follows:

    var b = (int) (f * 100f);
    0000005a  fld        dword ptr [ebp-3Ch]
    0000005d  fmul       dword ptr ds:[035E1648h]
    00000063  fstp       qword ptr [ebp-5Ch]
    00000066  movsd      xmm0,mmword ptr [ebp-5Ch]
    0000006b  cvttsd2si  eax,xmm0
    0000006f  mov        dword ptr [ebp-44h],eax

whereas d is calculated as

    var d = (int) a;
    00000096  fld        dword ptr [ebp-40h]
    00000099  fstp       qword ptr [ebp-5Ch]
    0000009c  movsd      xmm0,mmword ptr [ebp-5Ch]
    000000a1  cvttsd2si  eax,xmm0
    000000a5  mov        dword ptr [ebp-4Ch],eax

Finally, my question is: why is the second line of output different from the fourth? Does the additional fmul account for the difference? Also note that if the last (already unrepresentable) digit of the float f is removed or even decreased, everything "falls into place".

+7
compiler-construction floating-point c#
3 answers

Your question can be simplified to ask why these two results differ from each other:

    float f = 2.0499999f;
    var a = f * 100f;
    var b = (int)(f * 100f);
    var d = (int)a;
    Console.WriteLine(b);
    Console.WriteLine(d);

If you look at the code in .NET Reflector, you will see that the code above is actually compiled as if it were the following code:

    float f = 2.05f;
    float a = f * 100f;
    int b = (int) (f * 100f);
    int d = (int) a;
    Console.WriteLine(b);
    Console.WriteLine(d);

Floating-point calculations are not always exact. The result of 2.05f * 100f is not exactly 205, but slightly less, because of rounding errors. When this intermediate result is converted to an int, it is truncated. When it is stored as a float, it is rounded to the nearest representable value. These two rounding methods give different results.
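To make the two paths visible, here is a minimal sketch (not part of the original answer) that holds the product at double precision and then converts it both ways:

    // Illustrative only: the wide intermediate is a little below 205.
    // Truncating to int loses that; rounding to float does not.
    double wide = (double)2.05f * 100.0;     // ~204.99999523162842
    Console.WriteLine(wide);                 // slightly less than 205
    Console.WriteLine((int)wide);            // 204 (conversion to int truncates)
    Console.WriteLine((float)wide);          // 205 (rounds to the nearest float)
    Console.WriteLine((int)(float)wide);     // 205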


Regarding your comment on my answer: when you write this:

    Console.WriteLine((int) (2.0499999f * 100f));
    Console.WriteLine((int) (float) (2.0499999f * 100f));

the calculations are performed entirely by the compiler, at compile time. The above code is equivalent to this:

    Console.WriteLine(204);
    Console.WriteLine(205);
+5

In the comment you asked

Are these rules different?

Yes. Or rather, the rules allow for different behaviors.

And if so, should I have known about this from the C# specification or the MSDN documentation, or is it just a random discrepancy between the compiler and the runtime?

It is implied by the specification. Floating-point operations must be performed with at least a certain minimum level of precision, but the compiler or the runtime is allowed to use greater precision if it sees fit. This can lead to large, observable changes when you perform operations that amplify small differences. Rounding, for example, can turn an extremely small difference into an extremely large one.
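To see how that permitted extra precision can become observable, here is a hedged sketch (not from the original answer); whether the first comparison prints True or False can depend on the JIT, the platform, and the optimization settings:

    float f = 2.0499999f;   // stored as the float 2.05f

    // May print False if the product is held at higher precision (as in the
    // question's x86 disassembly), or True if it is kept at float precision;
    // the specification permits either.
    Console.WriteLine(f * 100f == 205f);

    // The explicit (float) cast narrows the intermediate back to 32 bits
    // (the question's c and e show this happening), so this prints True.
    Console.WriteLine((float)(f * 100f) == 205f);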

This extra-precision behavior leads to quite frequently asked questions. For some background on this and other situations that can lead to similar discrepancies, see the following:

Why does this floating point calculation give different results on different machines?

C# XNA Visual Studio: the difference between "release" and "debug" modes?

Does CLR JIT optimization violate causality?

https://stackoverflow.com/questions/2494724

+4

Mark is right about the compiler. Now let's fool the compiler:

    float f = (Math.Sin(0.5) < 5) ? 2.0499999f : -1;
    var a = f * 100f;
    var b = (int) (f * 100f);
    var c = (int) (float) (f * 100f);
    var d = (int) a;
    var e = (int) (float) a;
    Console.WriteLine(a);
    Console.WriteLine(b);
    Console.WriteLine(c);
    Console.WriteLine(d);
    Console.WriteLine(e);

The first expression is meaningless, but it prevents the compiler from evaluating the value at compile time. Result:

    205
    204
    205
    204
    205
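Any value the compiler cannot treat as a constant works for this. As a hedged alternative sketch (not from this answer), routing the literal through a separate method also keeps the multiplication out of the compiler and in JIT-compiled code:

    using System;
    using System.Runtime.CompilerServices;

    class Program
    {
        // NoInlining is only a hint to the JIT; the C# compiler already
        // refuses to fold constants across a method call, which is all
        // that is needed here.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static float GetF()
        {
            return 2.0499999f;   // still stored as the float 2.05f
        }

        static void Main()
        {
            float f = GetF();                           // not a compile-time constant
            Console.WriteLine((int)(f * 100f));         // 204 on the x86/x87 JIT from the question; may be 205 on other JITs
            Console.WriteLine((int)(float)(f * 100f));  // 205
        }
    }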

OK, I found an explanation.

2.0499999f cannot be stored exactly as a float, because a float only guarantees about 7 decimal digits of precision, and this literal has 8 significant digits, so the compiler rounded it to the nearest value it could store. (It should give a warning, IMO.)

If you change it to 2.049999f, the result will be as expected.
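A quick way to confirm this rounding (a small sketch, not part of the original answer) is to compare the literals directly: 2.0499999f and 2.05f denote the very same 32-bit value, while the shorter 2.049999f does not:

    // Both literals round to the nearest representable float,
    // 2.0499999523162841796875, so they are the same value.
    Console.WriteLine(2.0499999f == 2.05f);   // True
    Console.WriteLine(2.049999f == 2.05f);    // False: a distinct float
    Console.WriteLine(2.0499999f);            // prints 2.05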

+2
