In fact, there is a clever algorithm for this. We will use A to denote the array, N to denote the size of the array, and n to denote the number of positions to shift by. After the rotation, we would like the element at position i to end up at position ((i + n) mod N), so we can define the new positions by the following formula:
f(j) := (j + n) mod N (j = 0,...,N - 1)
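As a quick illustration, here is a minimal Java sketch of the mapping f for a small, hypothetical case (the class and method names are my own, and N = 6, n = 2 are chosen only for the example):

```java
public class ShiftMap {
    // The target position f(j) = (j + n) mod N from the text.
    static int f(int j, int n, int N) {
        return (j + n) % N;
    }

    public static void main(String[] args) {
        int N = 6, n = 2;
        for (int j = 0; j < N; j++)
            System.out.println(j + " -> " + f(j, n, N));
    }
}
```

For these values every position j is sent two places to the right, wrapping around at the end of the array.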
The general idea of this algorithm is as follows: we do not want to move elements around more than necessary, so ideally we would like to place each element in its correct (shifted) position on the first try. Say we start with the element at position i . We want to move it to position f(i) , but doing so would overwrite the element already there, so we first save the element at position f(i) and then perform the move. Once the first element has been moved, we need to pick the next element to place. Since we want to save space, the obvious candidate is the element we just saved (the one that was at position f(i) ). As before, we save the element at f(f(i)) and then copy our saved element into that position. We keep repeating this process (walking through the positions i, f(i), f(f(i)), f(f(f(i))), ... ) until we reach an element that has already been shifted (which is guaranteed to happen, since there are only finitely many positions). If we have gone through all the elements, we are done; if not, we pick another element that has not yet been shifted, say at position j , and repeat the procedure (walking through j, f(j), f(f(j)), f(f(f(j))), ... ). But before we can implement such an algorithm, or even decide whether it is really a good algorithm, we need to answer a few questions:
Say we walk through the positions i, f(i), f(f(i)), ... How can we tell that we have reached a position that has already been shifted? Do we need to record every position we pass through? If so, we would need an auxiliary array of size N (to cover all positions), and we would also need to search it every time we move an element; this would make the algorithm terribly inefficient. Fortunately, this is not necessary: the sequence i, f(i), f(f(i)), ... must wrap around back to position i , so we can simply wait until we reach that position again. We can prove this claim as follows: suppose the first repeated position we encounter is not i . Then two different positions must map to the same position under f — a contradiction, since f is invertible (its inverse sends j to (j - n) mod N ).
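The "wraps around back to i" claim can be checked numerically. Below is a small sketch (class and method names are my own, and the values N = 6, n = 2 are hypothetical) that follows the walk from a starting position and counts the steps until it returns:

```java
public class CycleBack {
    // Length of the walk i, f(i), f(f(i)), ... before it returns to i.
    static int cycleLength(int i, int n, int N) {
        int j = i, steps = 0;
        do {
            j = (j + n) % N;   // apply f
            steps++;
        } while (j != i);
        return steps;
    }

    public static void main(String[] args) {
        // For N = 6, n = 2 the walk is 0 -> 2 -> 4 -> 0: three steps,
        // which matches N / gcd(N, n) = 6 / 2 = 3.
        System.out.println(cycleLength(0, 2, 6));
    }
}
```

Note that the walk never terminates at some other repeated position: it always closes back at the start, exactly as the argument above predicts.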
Suppose we have finished the walk through i, f(i), f(f(i)), ... , but some elements still remain unshifted (we can tell by counting how many elements we have shifted so far). How do we now find a position j that holds such an element? And once we have finished this second walk (through j, f(j), f(f(j)), ... ), how do we find a third position k with an unshifted element, and so on? Again, this seems to suggest that we need an auxiliary array to track used/unused elements and to rescan it every time we need an unused one. However, we can relax once more, because, as we will show shortly, the starting positions (which we denoted i , j and k ) can be taken to be adjacent: if we start from position i , the next walk can start at i + 1 , then i + 2 , and so on.
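To see the adjacent starting positions in action, here is a sketch (names mine, values hypothetical) that prints each walk for N = 6, n = 2, where d = gcd(6, 2) = 2 walks starting at the adjacent positions 0 and 1 together cover every position:

```java
public class Cycles {
    // The positions visited starting from `start`: "start -> f(start) -> ...".
    static String cycle(int start, int n, int N) {
        StringBuilder sb = new StringBuilder().append(start);
        for (int j = (start + n) % N; j != start; j = (j + n) % N)
            sb.append(" -> ").append(j);
        return sb.toString();
    }

    public static void main(String[] args) {
        int N = 6, n = 2, d = 2;   // d = gcd(6, 2); starts 0..d-1 are adjacent
        for (int start = 0; start < d; start++)
            System.out.println(cycle(start, n, N));
    }
}
```

The two printed walks, "0 -> 2 -> 4" and "1 -> 3 -> 5", are disjoint and between them visit all six positions.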
Can the sequences i, f(i), f(f(i)), ... and j, f(j), f(f(j)), ... (where i and j are different starting positions) contain common elements? If they can, the algorithm is useless, since it might shift the same element twice, leaving it in the wrong position. The answer (of course) is that they cannot contain common elements, and we will show why.
Denote d := gcd(N, n) . For each of the integers: i = 0,...,d - 1 we define the following set:
S(i) := { kd + i | k = 0,...,N/d - 1}
It is easy to see that the sets S(0),...,S(d - 1) together cover the set {0,...,N - 1} . We also note that dividing an element of S(i) by d leaves remainder i , while dividing an element of a different set S(j) by d leaves a different remainder, j . Thus no two of the sets share an element. Altogether, we have established that the sets S(0),...,S(d - 1) form a partition of {0,...,N - 1} .
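For a concrete picture of this partition, here is a small sketch (names mine; N = 12 and d = 4 are hypothetical values with d dividing N) that lists each S(i):

```java
import java.util.ArrayList;
import java.util.List;

public class ResidueSets {
    // S(i) = { kd + i | k = 0,...,N/d - 1 } from the text.
    static List<Integer> S(int i, int d, int N) {
        List<Integer> s = new ArrayList<>();
        for (int k = 0; k < N / d; k++)
            s.add(k * d + i);
        return s;
    }

    public static void main(String[] args) {
        int N = 12, d = 4;   // hypothetical sizes; d must divide N
        for (int i = 0; i < d; i++)
            System.out.println("S(" + i + ") = " + S(i, d, N));
    }
}
```

It prints S(0) = [0, 4, 8] through S(3) = [3, 7, 11]: four disjoint sets whose union is {0,...,11}.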
Now, for each i = 0,...,d - 1 , we define the set T(i) as {i, f(i), f(f(i)), ...} . By the definition of f we can write T(i) as follows:
T(i) = {(kn + i) mod N | k is an integer}
Note that if x is an element of T(i) , then for some k we can write:
x = (kn + i) mod N = (k(n/d)d + i) mod N
Denote z := k(n/d) mod (N/d) ; then, multiplying by d (a residue mod N/d multiplied by d gives the corresponding residue mod N ), we have:
kn mod N = zd
and therefore
x = (kn + i) mod N = zd + i
So x is also in S(i) . Similarly, if we take some y from S(i) , we note that for some k :
y = kd + i
Since gcd(n/d, N/d) = 1 , there exists q such that q(n/d) mod (N/d) = 1 (a modular inverse), so, multiplying by kd , we can write:
kd = kqn mod N
and therefore
y = kd + i = ((kq)n + i) mod N
So y is also in T(i) . We conclude that T(i) = S(i) . From this fact we can easily deduce our earlier claims. First, since the sets form a partition of {0,...,N - 1} , the third claim holds (no two sequences contain a common element). Second, by the definition of the sets S(i) , any group of d adjacent positions in {0,...,N - 1} has each of its members in a different set, which settles the second claim.
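The equality T(i) = S(i) can also be verified numerically. Here is a sketch (class and method names are my own; N = 12, n = 8, hence d = 4, are hypothetical values) that builds both sets and compares them:

```java
import java.util.TreeSet;

public class TEqualsS {
    // Does the walk T(i) = { i, f(i), f(f(i)), ... } visit exactly the set S(i)?
    static boolean tEqualsS(int i, int n, int N) {
        int d = gcd(N, n);
        TreeSet<Integer> s = new TreeSet<>(), t = new TreeSet<>();
        for (int k = 0; k < N / d; k++)
            s.add(k * d + i);          // S(i) = { kd + i }
        int j = i;
        do {
            t.add(j);
            j = (j + n) % N;           // follow f until the walk closes
        } while (j != i);
        return s.equals(t);
    }

    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++)    // d = gcd(12, 8) = 4
            System.out.println("T(" + i + ") == S(" + i + "): " + tEqualsS(i, 8, 12));
    }
}
```

For every i the two sets come out equal, matching the proof above.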
What this means is that we can rotate all the elements at positions 0, d, 2d, ..., (N/d - 1)d simply by replacing the element at position n mod N with the element at position 0 , the element at position 2n mod N with the element at position n mod N , and so on, until we return to the element at position 0 (which we are sure will happen). Here is an example in pseudocode (assuming n mod N != 0; otherwise there is nothing to do):
temp <- A[0]
j <- N - (n mod N)
while j != 0 do
    A[(j + n) mod N] <- A[j]
    j <- (j - n) mod N
A[n mod N] <- temp
This covers the entire set S(0) . To cover the remaining sets S(1),...,S(d - 1) , we simply iterate over each set in the same way as the first:
for i <- 0 to d - 1 do
    temp <- A[i]
    j <- N - ((n - i) mod N)
    while j != i do
        A[(j + n) mod N] <- A[j]
        j <- (j - n) mod N
    A[(i + n) mod N] <- temp
Note that although we have two nested loops, each element moves exactly once, and we use only O(1) extra space. Here is an example Java implementation:
public static int gcd(int a, int b) {
    while (b != 0) {
        int c = a;
        a = b;
        b = c % a;
    }
    return a;
}

public static void shift_array(int[] A, int n) {
    int N = A.length;
    if (N == 0)
        return;
    n %= N;                 // normalize n into the range 0..N-1
    if (n < 0)
        n = N + n;
    if (n == 0)
        return;             // nothing to rotate; also avoids an out-of-range start index below
    int d = gcd(N, n);
    for (int i = 0; i < d; i++) {
        int temp = A[i];
        // walk the cycle backwards: j starts at the position whose target is i
        for (int j = i - n + N; j != i; j = (j - n + N) % N)
            A[(j + n) % N] = A[j];
        A[i + n] = temp;    // i + n < N here, since i < d and n <= N - d
    }
}
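To try the implementation out, here is a runnable usage sketch (the wrapper class name and sample values are my own) that wraps the gcd/shift_array pair from the text in a complete program:

```java
import java.util.Arrays;

public class ShiftDemo {
    static int gcd(int a, int b) {
        while (b != 0) {
            int c = a;
            a = b;
            b = c % a;
        }
        return a;
    }

    static void shift_array(int[] A, int n) {
        int N = A.length;
        if (N == 0) return;
        n %= N;
        if (n < 0) n = N + n;
        if (n == 0) return;                 // nothing to rotate
        int d = gcd(N, n);
        for (int i = 0; i < d; i++) {
            int temp = A[i];
            for (int j = i - n + N; j != i; j = (j - n + N) % N)
                A[(j + n) % N] = A[j];
            A[i + n] = temp;
        }
    }

    public static void main(String[] args) {
        int[] A = {1, 2, 3, 4, 5, 6};
        shift_array(A, 2);                  // each element moves two positions right
        System.out.println(Arrays.toString(A));   // [5, 6, 1, 2, 3, 4]
    }
}
```

The element that started at position 0 ends up at position 2, and the elements from positions 4 and 5 wrap around to the front, as the formula f(j) = (j + n) mod N prescribes.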