I've noticed that the C# jitter produces significantly slower code than the C++ compiler, even when no "managed" constructs are used (for example, arrays with bounds-checked indexing).
To quantify this, I timed the following simple loop:
public static int count = 1000000000;

public static int Main()
{
    int j = 0;
    for (int i = 0; i < count; ++i)
    {
        // Even iterations: shifted add; odd iterations: multiply by 7.
        j += (i % 2 == 0) ? ((i + 7) >> 3) : (i * 7);
    }
    return j;
}
This loop takes 3.88 seconds to execute (compiled with /o). An equivalent loop compiled with VC 2010 (-O2) takes 2.95 s.
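(The timing harness isn't shown above; a minimal sketch of one, assuming System.Diagnostics.Stopwatch and printing j so the loop can't be eliminated as dead code, would look like this:)

using System;
using System.Diagnostics;

public static class Bench
{
    public static int count = 1000000000;

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        int j = 0;
        for (int i = 0; i < count; ++i)
        {
            j += (i % 2 == 0) ? ((i + 7) >> 3) : (i * 7);
        }
        sw.Stop();
        // Printing j keeps the loop's result observable.
        Console.WriteLine("j = {0}, elapsed = {1} ms", j, sw.ElapsedMilliseconds);
    }
}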
To make sure that slow code was actually what got generated, I compared the machine code: I produced an assembly listing (/FA) from the VC compiler and attached the debugger to the C# program (after the loop had run).
Indeed, the C++ version uses some clever tricks. For example, to avoid a costly multiplication by 7, it keeps a separate register that it increments by 7 on each loop iteration (classic strength reduction). The C# version performs an imul every time. There are other differences as well.
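To illustrate the trick, here is the same transformation written out by hand in C#, a sketch of the strength reduction the C++ compiler applies (the variable i7 is mine, for illustration; the compiler keeps the equivalent value in a register):

// Hand-applied strength reduction: instead of computing i * 7 with an
// imul on every odd iteration, carry a running value i7 == i * 7 and
// add 7 to it each time i advances by 1.
int j = 0;
int i7 = 0;                      // invariant: i7 == i * 7
for (int i = 0; i < count; ++i, i7 += 7)
{
    j += (i % 2 == 0) ? ((i + 7) >> 3) : i7;
}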
I understand that the C# jitter has much less time to compile code at run time than VC has at build time. But the Java jitter, for example, dynamically re-optimizes frequently executed methods; the C# jitter does not seem to do this.
My question is: are there plans to improve the C# jitter in future versions of the framework?
kaalus