Fast Iterator Requirements

tl;dr: Is it possible to implement quicksort efficiently on a doubly linked list? My understanding, before thinking much about it, was no, it is not.

The other day I had occasion to consider the iterator requirements of the basic sorting algorithms. The basic O(N²) ones are quite simple:

  • Bubble sort - 2 forward iterators will do nicely, one trailing the other.
  • Insertion sort - 2 bidirectional iterators: one for the out-of-order element, one for the insertion point.
  • Selection sort is a bit trickier, but forward iterators still do the trick.

Quicksort

__introsort_loop in std::sort (as in the GNU standard library, derived from the HP (1994) / Silicon Graphics (1996) STL) requires random access iterators:

__introsort_loop(_RandomAccessIterator __first, _RandomAccessIterator __last, _Size __depth_limit, _Compare __comp) 

As I expected.

Now, upon closer inspection, I cannot find anything that truly requires this for quicksort itself. The only thing that explicitly requires random access iterators is the call to std::__median , which needs to compute the middle element. Plain vanilla quicksort does not compute a median.

The partitioning step contains the check

  if (!(__first < __last)) return __first; 

Not a check that is available with bidirectional iterators. However, it could be replaced by tracking whether the two scan positions (one moving left to right, the other right to left) have met, with a simple condition

 if ( __first == __last ) this_partitioning_is_done = true; 

So: is it possible to implement quicksort reasonably efficiently using only bidirectional iterators? The recursion depth limit can still be maintained.

NB. I have not attempted a real implementation yet.

+8
c++ algorithm quicksort stl-algorithm
3 answers

tl;dr: yes

As you say, the problem is finding the pivot element, i.e. the element in the middle. Finding it takes O(1) with random access; with bidirectional iterators it takes O(n) (n/2 operations, to be precise). However, at each step you must build the left and right sub-ranges, containing the smaller and larger elements respectively. That is where the main work of quicksort happens anyway, right?

Now, when building the subcontainers (for the recursion step), my approach would be to keep an iterator h pointing at each subcontainer's front element, and to advance h every second time an element is appended. That way h points at the middle element, the next pivot, by the time you are ready to descend into the new recursion step.

Only the very first pivot needs an explicit scan, which does not matter, because O(n log n + n/2) = O(n log n).

Actually, this is just a runtime optimization and does not affect the complexity, because either way you traverse the list once (to place each value into its subcontainer) or twice (to find the pivot, then place each value into its subcontainer): O(2n) = O(n).
It is purely a matter of runtime, not complexity.

+1

You need random access iterators because you usually want to select the pivot element from the middle of the list. If you select the first or last element as the pivot instead, bidirectional iterators are enough, but then quicksort degenerates to O(N²) on pre-sorted lists.

+2

There is absolutely no problem implementing a quicksort strategy on a doubly linked list. (I believe it can also be adapted fairly easily to a singly linked list.) The only place in the traditional quicksort algorithm that depends on random access is the setup step, which uses something "clever" to select the pivot element. In reality, all those "clever" tricks are nothing more than heuristics that can be replaced with almost equally effective sequential methods.

I have implemented quicksort for linked lists before. There is nothing special about it; you just need to pay close attention to relinking the elements correctly. As you probably realize, a significant part of the value of list sorting algorithms comes from the fact that you can reorder elements by relinking the nodes instead of copying the values around. Not only can this be faster, it also (often more importantly) preserves the validity of external references that may be attached to the list's nodes.

PS. That said, for linked lists I would say the merge sort algorithm leads to a significantly more elegant implementation with equally good performance (unless you are dealing with certain cases that happen to work better with quicksort).

+1
