Is adding 64-bit support to existing 32-bit code difficult?

I have a library that I build for various 32-bit platforms, and it now needs to support 64-bit architectures. What are the most common strategies for extending existing 32-bit code to support 64 bits? Should I use #ifdef or something else?

+6
Tags: c, 64bit, embedded, 32bit-64bit
3 answers

The amount of effort will depend entirely on how well the source code was written. In the best case, nothing is needed beyond a recompile. In the worst case, you will have to spend a lot of time making your code "64-bit clean."

Common problems (a short sketch illustrating a couple of these follows the list):

  • assumptions about the sizes of int / long / pointers / etc.
  • assigning pointers to ints and vice versa
  • relying on default argument or return-type promotions (i.e. missing function prototypes)
  • incorrect printf / scanf format specifiers
  • assumptions about structure size / alignment / padding (especially around file or network I/O, or interaction with other APIs, etc.)
  • invalid casts when doing pointer arithmetic with byte offsets
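
As a minimal sketch of two of these pitfalls (the variable names here are made up for illustration), compare the broken variants in the comments with their 64-bit-clean replacements:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;

        /* Broken on LP64/LLP64 targets: an int cannot hold a 64-bit pointer.
         *     int addr = (int)p;
         */

        /* 64-bit clean: uintptr_t is defined to be wide enough for an object pointer. */
        uintptr_t addr = (uintptr_t)p;
        printf("pointer bits: 0x%" PRIxPTR "\n", addr);

        /* Broken: %d assumes the argument is an int, but sizeof yields size_t.
         *     printf("size: %d\n", sizeof(value));
         */

        /* 64-bit clean: %zu is the conversion specifier for size_t. */
        printf("size: %zu bytes\n", sizeof(value));
        return 0;
    }

Most compilers will flag both broken variants (for example with -Wall and format warnings enabled), which makes a warning-clean build a useful first pass when auditing code for a 64-bit port.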
+16

Just don't rely on assumptions about the machine word size: always use sizeof, the types from stdint.h, and so on. Unless you depend on different library calls for different architectures, #ifdefs should not be necessary.
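
A minimal sketch of that approach (the mask variable is a hypothetical example): fixed-width types from stdint.h plus sizeof remove the need for per-architecture #ifdefs.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Word-size assumption: on a 32-bit build this is 0xFFFFFFFF, but on an
         * LP64 build long grows to 8 bytes and the value silently changes.
         *     unsigned long mask = ~0UL;
         * An explicit width from <stdint.h> behaves the same everywhere: */
        uint32_t mask = UINT32_MAX;

        /* Never hard-code byte counts; let sizeof supply them. */
        unsigned char buf[sizeof mask];
        memcpy(buf, &mask, sizeof mask);

        printf("mask occupies %zu bytes in every build\n", sizeof mask);
        return 0;
    }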

+1

The simplest strategy is to build what you have with 64-bit settings and test it. Some code will not need any changes at all. Other code, usually code with wrong assumptions about the sizes of ints and pointers, will be much more fragile and will need to be modified to become architecture independent.

Very often, code that reads and writes binary records causes the most problems. This is especially true in environments where an integer type grows from 32 to 64 bits when switching to a 64-bit build. The usual failure is that an integer is originally written to a file at its 32-bit width and is then read back at the wrong width by the 64-bit build, where that type is 64 bits wide. A sketch of one way to avoid this follows.
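
A minimal sketch of that fix, assuming a plain stdio file format and hypothetical write_count / read_count helpers: pin the on-disk width with a fixed-width type and widen only in memory, so the file layout is identical in 32-bit and 64-bit builds.

    #include <stdio.h>
    #include <stdint.h>

    /* Write the record count with an explicit 32-bit width on disk. */
    int write_count(FILE *f, long count)
    {
        int32_t fixed = (int32_t)count;   /* width pinned regardless of build */
        return fwrite(&fixed, sizeof fixed, 1, f) == 1 ? 0 : -1;
    }

    int read_count(FILE *f, long *count)
    {
        int32_t fixed;
        if (fread(&fixed, sizeof fixed, 1, f) != 1)
            return -1;
        *count = fixed;                   /* widen in memory after reading */
        return 0;
    }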

-1
