You can pinpoint which method is faster to within one ten-thousandth of a millisecond (100 nanoseconds), a unit known as a “tick” in .NET.
This is done using the Stopwatch class in the System.Diagnostics namespace:
var sw1 = new System.Diagnostics.Stopwatch();
var sw2 = new System.Diagnostics.Stopwatch();

// Version 1
sw1.Start();
for (int num = 0; num < 100; num += 2)
{
    Console.WriteLine(num);
}
sw1.Stop();

// Version 2
sw2.Start();
for (int num = 0; num < 100; num++)
{
    if (num % 2 == 0)
    {
        Console.WriteLine(num);
    }
}
sw2.Stop();

Console.Clear();
Console.WriteLine("Ticks for first method: " + sw1.ElapsedTicks);
Console.WriteLine("Ticks for second method: " + sw2.ElapsedTicks);
The output will show that the first method is faster.
Why is this so? In the first version, ignoring the console output, only one operation is performed per iteration (num += 2), and the loop runs 50 iterations in total.
In the second version, each iteration performs an increment (num++) and a modulo (num % 2), plus the comparison against 0, and the loop runs 100 iterations.
If you measure the program with the console output left in, the times will be far larger, on the order of milliseconds, because Console.WriteLine itself takes a lot of time; both versions print the same 50 numbers, so most of what you measure is console I/O rather than the loop logic. If you want to measure only the algorithm, omit the console output.
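As a sketch of that stripped-down measurement (same loops as above; the sum accumulators are my addition so the loop bodies still do observable work and are not optimized away):

var sw1 = System.Diagnostics.Stopwatch.StartNew();
long sum1 = 0;
for (int num = 0; num < 100; num += 2)
{
    sum1 += num; // stand-in for the printed value
}
sw1.Stop();

var sw2 = System.Diagnostics.Stopwatch.StartNew();
long sum2 = 0;
for (int num = 0; num < 100; num++)
{
    if (num % 2 == 0)
    {
        sum2 += num; // stand-in for the printed value
    }
}
sw2.Stop();

Console.WriteLine("Ticks for first method: " + sw1.ElapsedTicks);
Console.WriteLine("Ticks for second method: " + sw2.ElapsedTicks);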
With the output removed, the two versions average 24 ticks and 43 ticks respectively on my machine.
So, in conclusion, the first method is faster by about 19 ticks, i.e. roughly 19/10,000 of a millisecond.
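One caveat: Stopwatch.ElapsedTicks counts ticks of Stopwatch.Frequency, which is hardware-dependent and not necessarily the 100-nanosecond TimeSpan tick, so the 19/10,000 ms figure assumes the two coincide. A portable conversion looks like this (the 43 and 24 are just the averages quoted above):

long tickDifference = 43 - 24; // gap measured above, in Stopwatch ticks
double milliseconds = tickDifference * 1000.0 / System.Diagnostics.Stopwatch.Frequency;
Console.WriteLine(milliseconds + " ms");
// Alternatively, sw1.Elapsed is a TimeSpan, whose Ticks are always 100 ns:
// Console.WriteLine(sw1.Elapsed.TotalMilliseconds + " ms");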