"Simulate" a 64-bit integer with two 32-bit integers

I am writing a very computationally intensive procedure for a mobile device, and I am limited to 32-bit processors. Essentially, I compute dot products of huge datasets (> 12k signed 16-bit integers). Floating-point operations are too slow, so I was looking for a way to do the same calculation with integer types. I stumbled upon something called "Floating Point" arithmetic (p. 17 of a related paper). It does the job well, but now I run into a 32-bit problem: 32 bits are just not enough to store the output of my calculation with sufficient accuracy.

Just to clarify, the reason the precision is not enough is that I would have to drastically reduce the precision of each element of my arrays in order to end up with a number that fits in a 32-bit integer. It is the sum of ~16,000 terms that makes my result so large.

Is there a way (ideally with a reference to an article or a textbook) to use two 32-bit integers as the most significant word and the least significant word, and define arithmetic on them (+, -, *, /) to process the data? Also, are there perhaps better ways to do this? Are there problems with this approach? I am fairly flexible about the programming language: I would prefer C/C++, but Java works. I'm sure someone has done this before.
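For illustration only, here is a minimal sketch of the kind of two-word arithmetic being asked about (the type and function names are invented, not from any answer below): addition comes down to adding the low words, detecting the unsigned wrap-around, and carrying into the high word; negation gives subtraction as a + (-b).

```c
#include <stdint.h>

/* A hypothetical two-word 64-bit value: hi carries the sign,
   lo is the unsigned low word. */
typedef struct { int32_t hi; uint32_t lo; } i64pair;

/* Add: if the low-word sum wrapped around, carry 1 into the high word. */
i64pair pair_add(i64pair a, i64pair b) {
    i64pair r;
    r.lo = a.lo + b.lo;                  /* wraps modulo 2^32 */
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry on wrap-around */
    return r;
}

/* Negate (two's complement across both words), so a - b = a + (-b). */
i64pair pair_neg(i64pair a) {
    i64pair r;
    r.lo = ~a.lo + 1u;
    r.hi = ~a.hi + (r.lo == 0);          /* the +1 carries only if lo == 0 */
    return r;
}
```

Multiplication and especially division take noticeably more work than this, which is part of why the answers below point at compiler-provided 64-bit types instead.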

+5
5 answers

I am pretty sure the JVM must support the 64-bit long type, and if the platform does not support it natively, the VM must emulate it. However, if you cannot afford float for performance reasons, the JVM will probably kill you too.

Most C and C++ implementations provide emulated 64-bit arithmetic for 32-bit targets; I know MSVC and GCC do. However, you should be aware that it can take many integer instructions to replace a single floating-point instruction. You should consider whether the specs for this program are unreasonable, or whether you can free up performance somewhere else.

+7

Both languages you mention give you a 64-bit integer type out of the box:

long val; // Java

#include <stdint.h>
int64_t val; // C
+4
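As a usage sketch of how this applies to the workload in the question (the function and array names are my own illustration): each product of two signed 16-bit values needs at most 31 bits, and a sum of ~16,000 of them at most about 45 bits, so a 64-bit accumulator has plenty of headroom.

```c
#include <stdint.h>
#include <stddef.h>

/* Dot product of two signed 16-bit arrays, accumulated in 64 bits.
   Each product fits in 32 bits; ~16k of them need at most ~45 bits. */
int64_t dot16(const int16_t *a, const int16_t *b, size_t n) {
    int64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];  /* 32-bit product, widened on add */
    return acc;
}
```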

. , - ?

+2

If you are using Java, the answer is simple: use Java's long. A Java long is always 64 bits; the JVM takes care of that. Even on platforms without native 64-bit support, the JVM will emulate it.

And if in Java even long is not enough, there is BigInteger.

+2

Speaking of C/C++: any reasonable compiler supports the long long type as a 64-bit integer with all the usual arithmetic.
Combined with -O3, it has a very good chance of generating the best possible code for 64-bit arithmetic on your platform.

+2
