I am doing the exercises in the Introduction to Algorithms section of CLRS. This is not graded homework or anything like that; I'm just trying to understand the problem.
The problem is this:
We can express insertion sort as a recursive procedure as follows. To sort A[1..n], we recursively sort A[1..n-1] and then insert A[n] into the sorted array A[1..n-1]. Write a recurrence for the running time of this recursive version of insertion sort.
Solution:
Since in the worst case it takes O(n) time to insert A[n] into the sorted array A[1..n-1], we obtain the recurrence T(n) = O(1) if n = 1, and T(n) = T(n-1) + O(n) if n > 1. The solution to this recurrence is T(n) = O(n^2).
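For context, the recursive procedure the exercise describes can be sketched in Python like this (a hedged illustration, not code from the book; Python lists are 0-indexed, so A[0..n-1] here corresponds to the book's A[1..n]):

```python
def recursive_insertion_sort(A, n=None):
    """Sort A[0..n-1] in place using the recursive formulation."""
    if n is None:
        n = len(A)
    if n <= 1:
        return                       # base case: T(1) = O(1)
    recursive_insertion_sort(A, n - 1)  # the T(n-1) term: sort the first n-1 elements
    # Insert A[n-1] into the already-sorted prefix A[0..n-2];
    # this loop is the O(n) term (worst case: shift every element).
    key = A[n - 1]
    i = n - 2
    while i >= 0 and A[i] > key:
        A[i + 1] = A[i]
        i -= 1
    A[i + 1] = key
```

Each call does O(n) insertion work plus one recursive call on a prefix that is one element shorter, which is exactly the shape T(n) = T(n-1) + O(n).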
So, I get that if n = 1, the array is already sorted, so it takes O(1) time. I also understand the O(n) part: it is the step where we insert an element into the sorted array, which takes O(n) time in the worst case, when we have to go through the whole array. But I don't understand the second part of the recurrence.
It's hard for me to understand the recursive part, T(n-1). Does T(n-1) mean that each recursive call sorts a subarray that is smaller by one element? That doesn't seem right to me.
sorting algorithm recursion recurrence
Harrison