Are 64-bit programs larger and faster than 32-bit versions?

I suppose I'm focusing on x86, but I'm generally interested in the move from 32 to 64 bits.

Logically, I can see that constants and pointers will in some cases be larger, so programs are likely to be larger. And the desire to allocate memory on word boundaries for efficiency means more padding between allocations.

I have also heard that 32-bit mode on x86 has to flush its cache on context switches because of possibly overlapping 4 GB address spaces.

So what are the real benefits of 64-bit?

And as a supplementary question, would 128-bit be even better?

Edit:

I have just written my first 64-bit program. It builds linked lists/trees of 16-byte (32-bit version) or 32-byte (64-bit version) objects and does a lot of printing to stderr. It is not a really useful program, and not something typical, but it is my first.

Size: 81128 bytes (32-bit) vs 83672 bytes (64-bit), so no big difference.

Speed: 17 s (32-bit) vs 24 s (64-bit), running on a 32-bit OS (OS X 10.5.8).

Update:

I note that a hybrid x32 ABI (Application Binary Interface) is being developed that is 64-bit but uses 32-bit pointers. For some tests it produces smaller code and faster execution than either 32-bit or 64-bit.

https://sites.google.com/site/x32abi/

+60
performance 64bit 32-bit 128bit
Mar 04 '10 at 10:20
8 answers

Unless you need access to more memory than 32-bit addressing allows, the benefits will be small, if any.

When running on a 64-bit processor, you get the same memory interface whether you are running 32-bit or 64-bit code (you use the same cache and the same bus).

While the x64 architecture has a few more registers, which makes optimization easier, this is often counteracted by the fact that pointers are now larger, and using any structures with pointers results in increased memory traffic. I would estimate the increase in overall memory usage for a 64-bit application compared to a 32-bit one to be around 15-30%.

+21
Mar 04 '10 at

I typically see a 30% speed improvement for compute-intensive code on x86-64 compared to x86. This is most likely due to the fact that we have 16 64-bit general-purpose registers and 16 SSE registers instead of 8 32-bit general-purpose registers and 8 SSE registers. This is with the Intel ICC compiler (11.1) on x86-64 Linux; results with other compilers (e.g. gcc) or other operating systems (e.g. Windows) may of course differ.

+34
Mar 04 '10 at 11:18

Regardless of the benefits, I would advise you to always compile your program for the system's default word size (32-bit or 64-bit), because if you compile a library as a 32-bit binary and provide it on a 64-bit system, you force everyone who wants to link against your library to provide their library (and every other library dependency) as a 32-bit binary, even though the 64-bit version is the default available. This can be quite a nuisance for everyone. If in doubt, provide both versions of your library.

As for the practical advantages of 64-bit, the most obvious is the larger address space: if you mmap a file, you can address more of it at once (and load larger files into memory). Another benefit is that, assuming the compiler does a good job of optimizing, many of your arithmetic operations can be parallelized (for example, placing two pairs of 32-bit numbers in two registers and performing the two additions with a single add operation), and larger computations will run faster. However, the whole 64-bit vs 32-bit thing will not help you with asymptotic complexity at all, so if you want to optimize your code, you should probably be looking at the algorithms rather than at constant factors like this.

EDIT:
Please disregard my statement about parallel addition. It is not performed by an ordinary add instruction; I was confusing it with some of the vectorized/SSE instructions. A more accurate benefit, besides the larger address space, is the additional general-purpose registers, which means more local variables can be kept in the CPU's register file, which is much faster to access than placing the variables in memory (which usually means going out to the L1 cache).

+14
Mar 04 '10 at 10:36

In addition to having more registers, 64-bit has SSE2 available by default. This means that you can indeed perform some calculations in parallel. The SSE extensions had other goodies too. But I guess the main benefit is not having to check for the presence of the extensions: if it is x64, it has SSE2 available. ...If my memory serves me correctly.

+3
Dec 21 '12 at 12:55

In the specific case of x86 to x86_64, the 64-bit program will be about the same size, if not slightly smaller, use a bit more memory, and run faster. This is mostly because x86_64 does not just have 64-bit registers, it also has twice as many of them. x86 does not have enough registers to make compiled languages as efficient as they could be, so x86 code spends a lot of instructions and memory bandwidth shifting data back and forth between registers and memory. x86_64 has much less of that, so it takes a little less space and runs faster. Vector floating-point and bit-twiddling instructions are also much more efficient in x86_64.

In general, though, 64-bit code is not necessarily faster, and it is usually larger, both in code size and in memory usage at run time.

+2
Mar 04 '10 at 10:44

The only real justification for migrating your application to 64-bit is the need for more memory, as in large databases or ERP applications with hundreds of concurrent users, where the 2 GB limit is exceeded fairly quickly once the application caches data for better performance. This is especially the case on Windows, where int and long are still 32 bits (they have the new type _int64; only pointers are 64 bits). In fact, WOW64 is highly optimized on Windows x64, so 32-bit applications run with a low penalty on 64-bit Windows. My experience on Windows x64 is that a 32-bit application version runs 10-15% faster than the 64-bit one, since in the former case, at least for proprietary in-memory databases, you can use pointer arithmetic for maintaining the b-tree (the most processor-intensive part of database systems). The exception is computation-intensive applications requiring large decimals for the highest accuracy, beyond what a double can afford on a 32-bit operating system. These applications can use _int64 natively instead of software emulation. Of course, large disk-based databases will also show improvement over 32-bit, simply due to the ability to use more memory for caching query plans and so on.

+2
Dec 02 '12 at 3:20

Any applications that require heavy CPU usage, such as transcoding, display performance and media rendering, whether audio or visual, will certainly require (at this point) and benefit from 64-bit versus 32-bit, due to the CPU's ability to deal with the sheer amount of data being thrown at it. It is not so much a question of address space as of the way the data is dealt with. A 64-bit processor, given 64-bit code, will perform better, especially with mathematically demanding things like transcoding and VoIP data. In fact, any sort of "math" application should benefit from the use of 64-bit CPUs and operating systems. Prove me wrong.

+1
Sep 16 '15 at 9:33

More data is transferred between the CPU and RAM for each memory fetch (64 bits instead of 32), so 64-bit programs can be faster, provided that they are written so that they properly take advantage of this.

0
Mar 04 '10 at 10:39


