Is multiplication of two numbers a constant time algorithm?

Suppose I write

int a = 111; int b = 509; int c = a * b; 

So what is the time complexity of computing a * b? How is the multiplication operation performed?

+8
c++, operators
5 answers

Compiling this function:

 int f(int a, int b) { return a * b; } 

With gcc -O3 -march=native -m64 -fomit-frame-pointer -S gives me the following assembly:

 f:
     movl %ecx, %eax
     imull %edx, %eax
     ret

The first instruction ( movl ) copies the first argument into the return register; the second instruction ( imull ) multiplies it by the second argument - then the result is returned.

The actual multiplication is done by imull , which - depending on your processor model - will take a certain number of CPU cycles.

If you look at Agner Fog's instruction timing tables, you can see how long each instruction takes. On most x86 processors this is a small constant; however, the imul instruction on AMD K8 with a 64-bit operand and result is listed as taking 4-5 CPU cycles. I don't know whether that reflects a measurement issue or genuinely variable timing.

Also note that execution time is not the only factor. The integer has to be moved through the processor and arrive at the right place before it can be multiplied; all of this adds latency, which is also noted in Agner Fog's tables. There are further issues, such as cache effects, that also make life difficult - it's not so simple to say how fast something will run without actually running it.


x86 is not the only architecture, and it is entirely conceivable that there are CPUs and architectures with variable-time multiplication. This matters especially for cryptography, where algorithms that use multiplication can be vulnerable to timing attacks on such platforms.

+11

Multiplication on most common architectures is constant-time. The time to load the registers may vary depending on where the variables live (L1, L2, RAM, etc.), but the number of cycles the operation itself takes is constant. This contrasts with operations such as sqrt , which may need additional cycles to reach a given precision.

You can find instruction timings for AMD, Intel and VIA here: http://www.agner.org/optimize/instruction_tables.pdf

+2

By time complexity, I assume you mean: does it depend on the number of digits in a and b? That is, will the number of CPU cycles vary depending on whether you multiply 2 * 3 or 111 * 509? I think yes, it can vary, and it will depend on how the architecture implements the multiplication operation and how intermediate results are stored. Although there are many ways to do this, a simple/primitive way is to implement multiplication using a binary adder/subtractor circuit. Multiplying a * b adds a to itself b times using n-bit binary adders. Similarly, dividing a / b subtracts b from a until reaching 0, although more space is needed to store the quotient and the remainder.

+1
Compiling this function:

 void myfun() { int a = 111; int b = 509; int c = a * b; } 

The disassembled part:

 movl $111, -4(%ebp)
 movl $509, -8(%ebp)
 movl -4(%ebp), %eax
 imull -8(%ebp), %eax

So, as you can see, everything comes down to imull - in particular, the CPU's fetch, decode and execute cycle.

0

In your example, the compiler will do the multiplication at compile time, and your code will effectively become

 int c = 56499; 

If you changed your example to look like

 int c = a * 509; 

then the compiler MIGHT decide to rewrite your code as, for example,

 int c = a * ( 512 - 2 - 1 );
 int c = (a << 9) - (a << 1) - a;

I say might because the compiler compares the cost of the shift sequence with the cost of the multiply instruction and picks the better option. With a fast multiply instruction, this usually means only 1 or maybe 2 shifts will be faster.

If your numbers are too large to fit in an integer (32 bits), then arbitrary-precision routines are used, whose running time lies between O(n^2) and O(n log n), where n is the number of 32-bit parts needed to store the numbers.

0
