There is a trivial O(n^2) algorithm, but you can do it in O(n). For example:
A = [a, b, c, d, e]
P = [4, 3, 2, 0, 1]
We can swap each element of A into the position that P requires; after each swap the position just filled holds its final element, and the displaced element is carried on around the cycle until the cycle closes. The swapped elements are marked with ^:
[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap index 0 (where a is) with index 4
 ^           ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap index 4 (where a is) with index 1
    ^        ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap index 1 (where a is) with index 3
    ^     ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), cycle finished
After one cycle finishes, we scan forward to the next element that is not yet in its correct position and repeat. At the end you get the desired result, and since each position is touched a constant number of times (at most one swap is performed per position), the whole thing runs in O(n) time.
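The procedure above can be sketched as follows. This is an illustrative implementation (function and variable names are mine, not from the original answer); for clarity it uses an auxiliary visited array rather than marking inside P, which the in-place variants discussed below avoid:

```python
def apply_permutation(A, P):
    """Rearrange A in place so that A[i] ends up holding the original
    A[P[i]]. Cycle-following with an auxiliary visited array; O(n)."""
    n = len(A)
    visited = [False] * n
    for start in range(n):
        if visited[start]:
            continue
        i = start
        # Follow the cycle beginning at `start`: each swap puts the
        # correct element into position i and carries the displaced
        # element forward to the next position in the cycle.
        while P[i] != start:
            A[i], A[P[i]] = A[P[i]], A[i]
            visited[i] = True
            i = P[i]
        visited[i] = True
    return A
```

On the example from the answer, `apply_permutation(list("abcde"), [4, 3, 2, 0, 1])` produces `['e', 'd', 'c', 'a', 'b']`, matching the trace.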
To record which positions are already in the right place, you can either:
set the corresponding entry in P to -1, which destroys P irrecoverably: after the operations described above, P becomes [-1, -1, 2, -1, -1], which means only index 2 may still be out of place; one more step confirms it is already in the correct position and the algorithm finishes;
set the corresponding entry P[i] to -P[i] - 1 (always negative): P becomes [-5, -4, 2, -1, -2], from which the original P can be restored trivially in O(n).
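The second variant can be sketched like this (again an illustrative implementation with names of my choosing): finished entries are stored as -p - 1 inside P itself, so no extra array is needed, and a final pass negates the encoding to restore P:

```python
def apply_permutation_inplace(A, P):
    """Rearrange A in place so A[i] holds the original A[P[i]],
    using P itself to mark finished positions: a processed entry p
    is stored as -p - 1 (always negative), then P is restored."""
    n = len(A)
    for start in range(n):
        if P[start] < 0:           # already handled in an earlier cycle
            continue
        i = start
        while P[i] != start:
            nxt = P[i]
            A[i], A[nxt] = A[nxt], A[i]
            P[i] = -nxt - 1        # mark position i as done
            i = nxt
        P[i] = -start - 1          # close the cycle
    for i in range(n):             # restore P: -p - 1 -> p
        P[i] = -P[i] - 1
    return A
```

After the first cycle on the example (before index 2 is visited), P holds exactly the [-5, -4, 2, -1, -2] shown above; after the restore pass it is [4, 3, 2, 0, 1] again.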
Ziyao Wei