I like Codor's suggestion to use shifts and get away with zero additions at all!
But if you really may use only additions and no other operations such as shifts, logs, subtractions, etc., I believe the minimum number of additions to compute a * b is:

min{ ceil[log2(a+1)] + numbits(a), ceil[log2(b+1)] + numbits(b) } - 2

where ceil[log2(x+1)] is the number of binary digits of x, and numbits(x) is the number of 1-bits in the binary representation of x.
How did we get there? First look at your original example: you can reuse intermediate results. For instance:
8 + 8 = 16
16 + 16 = 32
32 + 8 = 40
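As a quick sanity check, that chain of reused additions can be written out directly (Python used just for illustration):

```python
# Compute 8 * 5 with three additions, reusing earlier results.
a = 8
x = a + a        # 8 + 8 = 16   (addition 1)
x = x + x        # 16 + 16 = 32 (addition 2)
result = x + a   # 32 + 8 = 40  (addition 3)
print(result)    # prints 40
```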
To generalize: if you need to multiply a by b using only additions of numbers you already have (including earlier results), you need:

ceil[log2(b+1)] - 1 additions to calculate all the intermediate numbers 2^k * a that you need
- In your example, ceil[log2(5+1)] - 1 = 2: you need 2 additions to calculate 16 and 32
numbits(b) - 1 additions to add all the intermediate results together, where numbits(b) is the number of 1-bits in the binary representation of b
- In your example, 5 = 2^2 + 2^0, so numbits(5) - 1 = 1: you need 1 addition to compute 32 + 8
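Putting both steps together, here is a small sketch (my own illustration, not part of the original answer) that multiplies by walking the binary digits of b and counts the additions it performs:

```python
def multiply_by_additions(a, b):
    """Multiply a * b using only additions, following the binary digits of b.

    Returns (product, number_of_additions).  Illustrative sketch only.
    """
    additions = 0
    power = a          # holds a * 2^k as we walk b's binary digits
    total = None
    k = b
    while k:
        if k & 1:      # this binary digit of b is set: include a * 2^k
            if total is None:
                total = power
            else:
                total = total + power
                additions += 1
        k >>= 1
        if k:          # one doubling per remaining binary digit of b
            power = power + power
            additions += 1
    return total, additions
```

For a = 8, b = 5 this returns (40, 3), and in every case I tried the addition count agrees with ceil[log2(b+1)] + numbits(b) - 2.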
Note that this means your statement

add the bigger number the smaller number of times

is not always the recipe that minimizes the number of additions.
For example, if you need to calculate 2^9 * (2^9 - 1), you are better off basing the additions on (2^9 - 1) than on 2^9, even though 2^9 is the bigger number. The fastest way:
x = (2^9-1) + (2^9-1)
And then
x = x+x
8 more times, for 9 additions in total.
If instead you added 2^9 to itself, you would need 8 additions to get all the numbers 2^k * 2^9 first, and then 8 more additions to add these numbers together, for a total of 16 additions.
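Both counts follow from the formula above; here is a hypothetical additions_needed helper (the addend is fixed, and we follow the binary digits of the other factor):

```python
# Additions needed when we follow the binary digits of factor b:
# ceil[log2(b+1)] + numbits(b) - 2, i.e. bit_length(b) + popcount(b) - 2.
def additions_needed(b):
    return b.bit_length() + bin(b).count("1") - 2

big, small = 2**9, 2**9 - 1
# Add the bigger number (2^9), following the digits of 2^9 - 1 (nine 1-bits):
print(additions_needed(small))   # prints 16
# Add 2^9 - 1, following the digits of 2^9 (a single 1-bit):
print(additions_needed(big))     # prints 9
```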