So the answer is no: this information does not really help. It may buy you a constant factor, but nothing in big-O terms.
To everyone suggesting hashing to get linear time: you can just as well apply the same trick to sorting itself. The technique is radix/bucket sorting (hash collation), and the price is memory: it blows up your space usage.
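For concreteness, here is a minimal sketch of that hash-based linear-time grouping (my illustration, not part of the original answer), using Python's `collections.Counter`:

```python
from collections import Counter

def group_duplicates(arr):
    # One O(n) pass to tally how many times each value occurs.
    counts = Counter(arr)
    # Emit each distinct value as a contiguous run. The groups come out
    # in whatever order the hash map yields them -- grouped, not sorted.
    out = []
    for value, k in counts.items():
        out.extend([value] * k)
    return out

print(group_duplicates([3, 1, 3, 2, 1, 2]))  # e.g. [3, 3, 1, 1, 2, 2]
```

The `Counter` is exactly where the memory cost lives: one bucket per distinct key, on top of the input itself.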
When the input carries more restrictions, you can use even neater tricks (e.g. sums, XORs, and so on); see the sketch below.
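As a hypothetical example of such a trick (my addition, under a stronger assumption than the original question): if every value appears exactly twice except one, a single XOR pass recovers the odd one out.

```python
from functools import reduce
from operator import xor

def find_singleton(arr):
    # Pairs cancel under XOR (x ^ x == 0), leaving the unpaired value.
    # O(n) time, O(1) extra space -- but only thanks to the strong
    # structural restriction on the input.
    return reduce(xor, arr, 0)

print(find_singleton([4, 7, 2, 7, 4]))  # 2
```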
However, for an algorithm restricted to comparisons on a generic array, reducing the problem this way does not buy you much.
To build some intuition, suppose each element has exactly one duplicate, so your array is a1, a1, a2, a2, ..., an, an (2n elements, only n distinct values).
The size of the solution space is n!: as long as each pair aj, aj stays together, the pairs themselves may appear in any order, as the problem statement allows. The size of the input space is (2n)!/2^n, the number of distinguishable arrangements of the 2n elements (each identical pair contributes a factor of 2 that we divide out).
This means your algorithm must extract enough information to distinguish ((2n)!/n!)/2^n = ((n+1)·(n+2)·...·(2n))/2^n outcomes. Each comparison yields at most 1 bit of information, so the number of comparisons required is at least log2(n+1) + log2(n+2) + ... + log2(2n) − n, which is Ω(n log n). Asymptotically, that is no better (and no worse) than ordinary sorting.
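A quick numeric sanity check of this bound (a sketch I am adding; `lgamma` is used only to evaluate the factorials, since lgamma(m + 1) == ln(m!)) shows that log2(((2n)!/n!)/2^n) indeed grows at the same rate as n·log2(n):

```python
import math

def lower_bound_bits(n):
    # log2( ((2n)! / n!) / 2^n ) = sum over k = n+1..2n of log2(k), minus n
    return (math.lgamma(2 * n + 1) - math.lgamma(n + 1)) / math.log(2) - n

for n in (10, 100, 1000, 10000):
    print(n, round(lower_bound_bits(n)), round(n * math.log2(n)))
```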
Here is a semi-rigorous treatment of sorting lower bounds: http://www.cs.cmu.edu/~avrim/451f11/lectures/lect0913.pdf
Perhaps I will get around to writing a similar proof for the question at hand.