F# performance degradation on x64 target?

I was recently surprised by the behavior of the F# compiler when targeting x64 compared to x86. The same application runs with the following times for the different targets:

x86:           68 ms
Any CPU / x64: 160 ms

These results strike me as strange: they differ by almost a factor of two. I had assumed that on a 64-bit processor under a 64-bit operating system, a 64-bit application would run faster than a 32-bit one.

So the question is: what's wrong? Is the problem in the compiler, or is it a mistake somewhere on my side?

Environment: Core 2 Duo, Windows 7 x64. The F# application: an FsYacc/FsLex-based language parser on .NET 4.

2 answers

This can happen for programs that use many data structures with a large number of pointers, since a pointer is 8 bytes on 64-bit but only 4 bytes on 32-bit. Pointer-chasing code is bottlenecked on cache misses. In the limit where 100% of your code is pointer chasing, the 64-bit version suffers twice as many cache misses as the 32-bit version, hence a roughly 2x slowdown.
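
A minimal sketch of the effect (my own illustration, not code from the question): a cons list is almost pure pointer chasing, since every node carries one reference that doubles from 4 to 8 bytes on x64. IntPtr.Size reports the pointer width of the running process.

    open System
    open System.Diagnostics

    // Each Cons cell holds a payload plus a reference to the next cell,
    // so traversing the list follows one pointer per element.
    type ConsList =
        | Nil
        | Cons of int * ConsList

    // Tail-recursive sum: the memory-access pattern is dominated by
    // dereferencing the 'rest' pointer of each node.
    let rec sum acc list =
        match list with
        | Nil -> acc
        | Cons (x, rest) -> sum (acc + x) rest

    // 4 in a 32-bit (x86) process, 8 in a 64-bit (x64) process.
    printfn "Pointer size: %d bytes" IntPtr.Size

    let list = List.fold (fun acc x -> Cons (x, acc)) Nil [1 .. 1000000]
    let sw = Stopwatch.StartNew ()
    let total = sum 0 list
    sw.Stop ()
    printfn "sum = %d in %d ms" total sw.ElapsedMilliseconds

Compiling the same snippet for x86 and x64 and comparing the printed times should show whether pointer-heavy traversal is where your slowdown comes from.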

For other kinds of programs, the 64-bit version may be faster than the 32-bit version, at least on x86/x64: x64 has twice as many general-purpose registers as 32-bit x86; newer instructions such as SSE/SSE2 are guaranteed to be available on x64 but not on 32-bit x86; and with a much larger address space you can make different space/speed trade-offs, such as caching values instead of recomputing them, or memory-mapping large files.
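
For instance, memory-mapping a file much larger than a few gigabytes is only practical in a 64-bit process. A hedged sketch using the standard .NET 4 MemoryMappedFile API (the function name and the guard are my own illustration):

    open System
    open System.IO.MemoryMappedFiles

    // Mapping a multi-gigabyte file into memory needs more contiguous
    // address space than a 32-bit process can offer.
    let readFirstByte (path: string) =
        if not Environment.Is64BitProcess then
            failwith "mapping very large files needs a 64-bit address space"
        use mmf = MemoryMappedFile.CreateFromFile path
        use view = mmf.CreateViewAccessor ()
        view.ReadByte 0L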


Have you tried using Int64 instead of Int32 in your application? Have you tried switching to an 8-bit character set? How do these changes affect performance?
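
To actually measure that first suggestion, a small hypothetical micro-benchmark along these lines (the time helper and loop sizes are mine; adapt them to your workload) would show whether Int32 versus Int64 arithmetic makes any difference on your machine:

    open System.Diagnostics

    // Times a function and prints the elapsed wall-clock milliseconds.
    let time label f =
        let sw = Stopwatch.StartNew ()
        let result = f ()
        sw.Stop ()
        printfn "%s: %d ms (result %A)" label sw.ElapsedMilliseconds result

    // The Int32 sum wraps around silently; only the timing matters here.
    time "Int32" (fun () ->
        let mutable acc = 0
        for i in 1 .. 100000000 do acc <- acc + i
        acc)

    time "Int64" (fun () ->
        let mutable acc = 0L
        for i in 1L .. 100000000L do acc <- acc + i
        acc)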

Are you running on a regular hard drive or an SSD?

