I noticed an interesting behavior with rounding/truncating floats in the C# compiler. Namely, when a floating-point literal exceeds the guaranteed precision of a float (7 significant decimal digits), then: a) explicitly casting the floating-point result to float (a semantically unnecessary operation) and b) storing the result of the intermediate calculation in a local variable both change the output. Example:
using System;

class Program
{
    static void Main()
    {
        float f = 2.0499999f;
        var a = f * 100f;
        var b = (int) (f * 100f);
        var c = (int) (float) (f * 100f);
        var d = (int) a;
        var e = (int) (float) a;
        Console.WriteLine(a);
        Console.WriteLine(b);
        Console.WriteLine(c);
        Console.WriteLine(d);
        Console.WriteLine(e);
    }
}
Output:
205
204
205
205
205
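To make the underlying values visible, here is a small sketch I put together (the exact digits printed depend on the runtime's formatting, but the comparisons should hold under IEEE 754 round-to-nearest):

using System;

class Values
{
    static void Main()
    {
        float f = 2.0499999f;

        // 2.0499999 is not exactly representable; the nearest float
        // happens to be the same one that 2.05f maps to.
        Console.WriteLine((double) f);     // ~2.0499999523162842
        Console.WriteLine(f == 2.05f);     // True

        // Computed at double precision, the product falls just short
        // of 205, so truncation yields 204.
        double wide = (double) f * 100.0;
        Console.WriteLine(wide);           // ~204.99999523162842
        Console.WriteLine((int) wide);     // 204

        // Rounded to float, the same product lands exactly on 205.
        float narrow = (float) wide;
        Console.WriteLine(narrow);         // 205
        Console.WriteLine((int) narrow);   // 205
    }
}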
In a debug build, JITted on my machine, b is computed as follows:
var b = (int) (f * 100f);
0000005a  fld         dword ptr [ebp-3Ch]
0000005d  fmul        dword ptr ds:[035E1648h]
00000063  fstp        qword ptr [ebp-5Ch]
00000066  movsd       xmm0,mmword ptr [ebp-5Ch]
0000006b  cvttsd2si   eax,xmm0
0000006f  mov         dword ptr [ebp-44h],eax
whereas d is calculated as
var d = (int) a;
00000096  fld         dword ptr [ebp-40h]
00000099  fstp        qword ptr [ebp-5Ch]
0000009c  movsd       xmm0,mmword ptr [ebp-5Ch]
000000a1  cvttsd2si   eax,xmm0
000000a5  mov         dword ptr [ebp-4Ch],eax
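If I read the asm correctly, the two sequences correspond to the following plain C# (my own interpretation, not something the compiler emits literally):

using System;

class Emulation
{
    static void Main()
    {
        float f = 2.0499999f;

        // b's path, as I read it: fmul produces the product at the FPU's
        // internal precision, fstp qword spills it as a 64-bit double,
        // and cvttsd2si truncates that double directly to int.
        double product = (double) f * 100.0;   // ~204.99999523...
        Console.WriteLine((int) product);      // 204

        // d's path: the product was already narrowed to float when it was
        // stored into the local a, and ~204.99999523 rounds up to exactly
        // 205f, so the later truncation sees 205.
        float narrowed = (float) product;
        Console.WriteLine((int) narrowed);     // 205
    }
}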
Finally, my question is: why does the second line of output differ from the fourth? Can this additional fmul really account for the difference? Also note that if the last digit of the float f (the one already beyond the guaranteed precision) is removed or even decreased, everything "falls into place".
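For completeness, this is what I mean by "falls into place": with one digit fewer, the nearest float is far enough below 2.05 that even the double-precision product stays below 205, so every variant truncates to 204 (again, my own sketch):

using System;

class OneDigitLess
{
    static void Main()
    {
        float g = 2.049999f;                          // one digit fewer
        Console.WriteLine((double) g * 100.0);        // ~204.9998998..., well below 205
        Console.WriteLine((int) (g * 100f));          // 204
        Console.WriteLine((int) (float) (g * 100f)); // 204 as well
    }
}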
compiler-construction floating-point c#
Alan