Possible duplicate:
Floating point division versus floating point multiplication
Recently, I wrote a program that measures how long my computer takes to perform floating-point multiplications, divisions, and additions.
For the timing, I used the QueryPerformanceFrequency and QueryPerformanceCounter functions to measure the intervals.
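Roughly, each per-operation loop looks like this (a simplified sketch with illustrative names, not my exact program; the volatile qualifiers are there to keep the compiler from optimizing the loop away):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const int N = 6000000;
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);   // counter ticks per second

        volatile float a = 1.000001f, b = 1.000002f, r = 0.0f;

        QueryPerformanceCounter(&start);
        for (int i = 0; i < N; ++i)
            r = a * b;                      // swap in a / b or a + b to time div / sum
        QueryPerformanceCounter(&stop);

        double us = (stop.QuadPart - start.QuadPart) * 1e6 / freq.QuadPart;
        printf("%d x real mult + assignment -> %f us\n", N, us);
        return 0;
    }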
I tested the program with 6,000,000 iterations: 6,000,000 multiplications, divisions, and sums (on float variables) and got the following results:
    OS             = Windows Vista (TM) Home Premium, 32-bit (Service Pack 2)
    Processor      = Intel Core (TM)2 Quad CPU Q8200
    Processor Freq = 2.33 GHz
    Compiler       = Visual C++ Express Edition

    nº iterations                         time in microseconds
    6000000 x real mult + assignment  ->  15685.024214 us
    6000000 x real div  + assignment  ->  51737.441490 us
    6000000 x real sum  + assignment  ->  15448.471803 us
    6000000 x real assignment         ->  12987.614348 us

    nº iterations            time in microseconds
    6000000 x real mults ->   2697.409866 us
    6000000 x real divs  ->  38749.827143 us
    6000000 x real sums  ->   2460.857455 us

    1 iteration    time in nanoseconds
    real mult  ->  0.449568 ns
    real div   ->  6.458305 ns
    real sum   ->  0.410143 ns
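(The per-iteration figures are just the loop totals divided by 6,000,000; for example, 2697.409866 us / 6,000,000 ≈ 0.45 ns per multiplication.)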
Is it plausible that a division takes ~6.46 ns, roughly 14 times longer than a multiplication (~0.45 ns), while an addition costs about the same as a multiplication (~0.41 ns)?