Link: How to calculate the time complexity of an algorithm
I found a good article on how to calculate the time complexity of any algorithm or program.
The most common metric for calculating time complexity is Big O notation. It strips out all constant factors, so the running time can be estimated in relation to N as N approaches infinity. In general, you can think of it like this:
statement;
Constant. The running time of the statement does not change with respect to N.
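A couple of concrete instances (my own examples, not from the article): a single arithmetic operation or a single array access costs the same no matter how large N is:
x = a + b;        /* arithmetic: O(1) */
x = list[N - 1];  /* array access: O(1), regardless of N */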
for ( i = 0; i < N; i++ ) statement;
Linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
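As a concrete illustration (my own sketch, not from the article), summing an array touches each element exactly once, so the running time grows in direct proportion to N:
/* Sums n elements in a single pass: O(N). */
long sum_array ( int list[], int n )
{
    long total = 0;
    for ( int i = 0; i < n; i++ )
        total += list[i];
    return total;
}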
for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        statement;
}
Quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four (N * N).
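For example (again my own sketch, not the article's), checking whether any two elements are equal compares every pair, so doubling N roughly quadruples the number of comparisons in the worst case:
/* Compares every pair of elements: O(N * N) in the worst case. */
int has_duplicate ( int list[], int n )
{
    for ( int i = 0; i < n; i++ )
        for ( int j = i + 1; j < n; j++ )
            if ( list[i] == list[j] )
                return 1; /* found a duplicate */
    return 0; /* all elements distinct */
}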
while ( low <= high ) {
    mid = ( low + high ) / 2;
    if ( target < list[mid] )
        high = mid - 1;
    else if ( target > list[mid] )
        low = mid + 1;
    else
        break;
}
Logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
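Here is the loop above completed into a self-contained function (my own completion; the declarations of low and high and the return convention are assumptions, since the snippet leaves them out). It returns the index of target in a sorted array, or -1 if it is absent:
/* Binary search over a sorted array: O(log N).
   The range [low, high] is halved on every iteration. */
int binary_search ( int list[], int n, int target )
{
    int low = 0, high = n - 1;
    while ( low <= high ) {
        int mid = low + ( high - low ) / 2; /* same as (low + high) / 2, but avoids overflow */
        if ( target < list[mid] )
            high = mid - 1;
        else if ( target > list[mid] )
            low = mid + 1;
        else
            return mid; /* found */
    }
    return -1; /* not found */
}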
void quicksort ( int list[], int left, int right )
{
    if ( left >= right )
        return; /* base case: a segment of 0 or 1 elements is already sorted */
    int pivot = partition ( list, left, right );
    quicksort ( list, left, pivot - 1 );
    quicksort ( list, pivot + 1, right );
}
N * log(N). The running time consists of N loops (iterative or recursive) that are logarithmic, so the algorithm is a combination of linear and logarithmic.
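The snippet calls partition but never defines it. Here is a minimal sketch of one common choice (a Lomuto-style partition; this is my assumption, not the article's code). It does the linear O(N) work performed at each level of the recursion:
/* Lomuto-style partition: uses the last element as the pivot,
   moves smaller elements to its left, and returns the pivot's
   final index. One scan of the segment: O(N). */
int partition ( int list[], int left, int right )
{
    int pivot = list[right];
    int store = left;
    for ( int i = left; i < right; i++ ) {
        if ( list[i] < pivot ) {
            int tmp = list[i]; list[i] = list[store]; list[store] = tmp;
            store++;
        }
    }
    int tmp = list[store]; list[store] = list[right]; list[right] = tmp;
    return store;
}
Note that with a simple pivot choice like this, an already-sorted input produces maximally unbalanced partitions, which is why quicksort's worst case is O(N * N) even though its average case is O(N * log(N)).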
In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures, such as cubic, exponential, and square root, but they are not nearly as common. Big O notation is written as O(<measure>), where <measure> is the growth function. The quicksort algorithm would be described as O(N * log(N)).
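To make the "dimensions" rule of thumb concrete (my example, not the article's), doing something with every item in three dimensions means three nested loops, which is cubic:
for ( i = 0; i < N; i++ )
    for ( j = 0; j < N; j++ )
        for ( k = 0; k < N; k++ )
            statement; /* executed N * N * N times: cubic, O(N * N * N) */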
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplified explanation. Big O is the most common, but it is also more complex than I have shown. There are also other notations, such as big omega, little o, and big theta. You probably won't run into them outside of an algorithm analysis course. ;)
Edit:
Now the question is how log n gets into the equation:
- At each step, you invoke the algorithm recursively on the first half and on the second half.
- So the total number of steps needed is the number of times it takes to get from n down to 1 when you divide the problem by 2 at each step.
Equation: n / 2^k = 1. Since 2^(log n) = n, we get k = log n. Thus the number of halving steps the algorithm requires is O(log n), and since each level does O(n) work, the whole algorithm is O(n log n).
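A quick way to convince yourself (my own sketch): count the halvings directly. For n = 8 the sequence is 8 -> 4 -> 2 -> 1, which is 3 steps, and log2(8) = 3:
/* Counts how many times n can be halved before reaching 1;
   for n >= 1 this returns floor(log2(n)). */
int halving_steps ( int n )
{
    int k = 0;
    while ( n > 1 ) {
        n /= 2;
        k++;
    }
    return k;
}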
In addition, Big O notation gives us an easy-to-compute, platform-independent approximation of how an algorithm behaves asymptotically (at infinity), which lets us divide algorithms into "families" by their complexity and easily compare them.
You can also check this question for more information: Time complexity of a program using recurrence equations