What is the time complexity of sleep sort?

Given this sorting algorithm, how would you express its time complexity?

Originally presented here (partial archive).

#!/bin/bash
function f() {
    sleep "$1"
    echo "$1"
}
while [ -n "$1" ]
do
    f "$1" &
    shift
done
wait

Example usage: ./sleepsort.bash 5 3 6 3 6 3 1 4 7
+59
time-complexity
Jun 24 '11
7 answers

I think paxdiablo is closest, but not for the right reason. Time complexity ignores issues on real hardware, such as cache sizes, memory limits and, in this case, the limited number of processes and the behaviour of the scheduler.

Based on the Wikipedia page for time complexity, I would say that you cannot determine the runtime complexity here, because if it is defined as:

Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

then we cannot talk about the runtime complexity of this algorithm, because the time the elementary operations take differs so wildly that the total time taken would differ by more than a constant factor.

+15
Jun 25 '11 at 5:00

O(max(input)+n)

The complexity just seems awkward to state, because most sorting algorithms are data-agnostic: their running time scales with the amount of data, not with the data itself.

FWIW, as indicated here, this is not a reliable way to sort data.
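
A quick way to see that unreliability (a hypothetical run of my own, assuming GNU sleep, which accepts fractional seconds): when the values are close together, the per-process startup overhead is comparable to the sleeps themselves, and the output may come back out of order.

./sleepsort.bash 0.03 0.01 0.02 0.011 0.012 0.013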

+26
Jun 24 '11 at 10:23

One point that no one seems to have considered is how those sleeps are implemented. They end up somewhere in the scheduler, and the operational complexity will depend on the scheduling algorithm used. For example, if the sleeps are placed as events on a priority queue, you will most likely get something equivalent to heapsort, with complexity O(n log n). A naive scheduling algorithm can lead to O(n^2).
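
A crude sketch of my own to make that hidden sort visible (not part of the original answer): drop the real sleeps and let an explicit queue stand in for the scheduler's timer list; the order in which the "timers" fire is just the sorted order of the requested durations.

# sort -n stands in for repeatedly extracting the earliest expiry;
# a heap-based timer queue would do the same work in O(n log n)
durations=(5 3 6 3 6 3 1 4 7)
printf '%s\n' "${durations[@]}" | sort -n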

+19
Jul 11 '11

Both the time complexity and the process complexity of that algorithm are O(braindead):

With a sufficiently large value in the data set, you will wait for an answer until the sun explodes.

With a sufficiently large data set size, you will

  • (1) hit your process limit; and
  • (2) find that the early sleeps finish before the last ones even start, which means that the set (2,9,9,9,9,9,...,9,9,1) will not sort the 1 and the 2 correctly (a sketch follows below).
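
Purely hypothetically, failure (2) could be provoked with something like the following, assuming your shell survives launching that many background jobs; forking a few thousand processes can easily take longer than 1 second, so the leading 2 may be printed before the trailing 1.

./sleepsort.bash 2 $(yes 9 | head -n 5000) 1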

Time complexity is irrelevant in that case. You cannot get any less optimized than "wrong".

It’s fine to use complexity analysis to compare algorithms as the data set size changes, but not when the algorithms are ridiculous in the first place :-)

+10
Jun 24 '11

If you read the thread, you will see that your question has already been answered: the time complexity is O(max(input)) and the operational complexity (number of operations) is O(n).

+2
Jun 24 '11 at 10:25

Although it looks linear, I think the complexity is still O(log(n) * max(input)).

When we talk about asymptotic time complexity, we mean how much time the algorithm takes as n grows infinitely large.

A comparison-based sorting algorithm cannot be faster than O(n log n), and sleep sort is, in effect, comparison-based:

Processes sleep for n seconds and then wake up. The OS has to find the shortest remaining sleep time among all the sleeping processes and wake the corresponding one when its time is up.

This requires a priority queue, which takes O(log n) time to insert an element, O(1) to find the minimum element, and O(log n) to remove the minimum element.
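
Tallying that up over all n elements (my own back-of-the-envelope, consistent with the bound above): n inserts * O(log n) + n find-mins * O(1) + n removals * O(log n) = O(n log n) of scheduler work in total.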

When n becomes very large, it will take more than 1 second just to wake up the processes, which makes the whole thing more than O(n).

+2
Mar 19 '13 at 7:30

I am with Jordan, except that, in my opinion, the wall-clock complexity is better expressed as O(2^m), where m is the size in bits of each element, rather than O(max(input)).

If each element has size m, the largest element will have an integer value of 2^m (minus one, but nobody cares about that). By construction, the algorithm requires the setup time to be less than 1, a constant.
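
For a sense of scale (my own arithmetic): with 32-bit elements that is up to 2^32 seconds, roughly 4.3 * 10^9 s, or about 136 years of sleeping.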

Thus the wall-clock time complexity is O(2^m), and the operation-count complexity is O(n).

A modified algorithm that accounts for the setup time would likely have wall-clock time complexity on the order of O(2^m + n). For example, it could note the current time at the start, calculate base_time = start_time + k*len(list) (for some suitable constant k), and then have the threads sleep until time base_time + i. Then k*len(list) is obviously O(n) and i is O(2^m), as before, for a total of O(2^m + n).
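
A rough sketch of what that modification might look like (hypothetical, not from the original answer; assumes GNU date and sleep with fractional seconds, bc for the arithmetic, and guesses k = 0.01 s per element):

#!/bin/bash
# base_time variant: pick a base far enough in the future that every job
# is launched before the smallest sleep needs to expire, then have each
# job sleep until base + i rather than for i seconds
k=0.01
n=$#
start=$(date +%s.%N)
base=$(echo "$start + $k * $n" | bc)

f() {
    local now
    now=$(date +%s.%N)
    sleep "$(echo "$base + $1 - $now" | bc)"
    echo "$1"
}

for i in "$@"; do
    f "$i" &
done
wait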

+1
Jun 09 '13 at 2:16


