Your algorithm works correctly, and you can prove it by induction.
That is, assume it works for any array of size N, and prove that it then works for any array of size N+1.
So, given an array of size N+1, think of it as a subarray of size N followed by one new element at the end. By the induction hypothesis, your algorithm selects one of the maximum elements of the subarray uniformly at random. It then behaves as follows:
If the new element is greater than the maximum of the subarray, return the new element. This is clearly correct.
If the new element is less than the maximum of the subarray, return the result of running the algorithm on the subarray. Also clearly correct.
The only slightly tricky case is when the new element is equal to the maximum of the subarray. In this case, let k be the number of maximal elements in the subarray. By the induction hypothesis, your algorithm chose each of them with probability 1/k. By keeping the previously chosen element with probability k/(k+1), you make the total probability of choosing any particular old maximum equal to 1/k * k/(k+1) = 1/(k+1), as desired. The new element is also selected with probability 1/(k+1), so we are done.
To complete the inductive proof, just check that the algorithm works on an array of size 1. And, for good measure, fix it so that it doesn't break on arrays of size zero :-)
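For reference, here is a minimal sketch of the kind of single-pass algorithm the proof above is about, assuming the goal is to return the index of a maximum element chosen uniformly among all maxima (the function name and details are illustrative, not necessarily your exact code):

```python
import random

def random_max_index(arr):
    """Return the index of a maximum element of arr, chosen uniformly
    at random among all maximum elements, in a single pass."""
    if not arr:
        raise ValueError("array must be non-empty")
    best_index = 0
    count = 1  # k: number of maximal elements seen so far
    for i in range(1, len(arr)):
        if arr[i] > arr[best_index]:
            best_index = i   # new strict maximum: take it
            count = 1
        elif arr[i] == arr[best_index]:
            count += 1
            # keep the current choice with probability k/(k+1),
            # switch to the new element with probability 1/(k+1)
            if random.randrange(count) == 0:
                best_index = i
    return best_index
```

Each equal element seen so far ends up selected with probability 1/k, exactly as the inductive step requires.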
[Update]
By the way, this algorithm and its proof are closely related to the Fisher-Yates shuffle (which I always thought of as the "Knuth shuffle", but Wikipedia tells me I'm behind the times).
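For comparison, a minimal sketch of the standard Fisher-Yates shuffle, which uses the same "each new element displaces the current choice with the right probability" idea and is proved correct by the same style of induction:

```python
import random

def fisher_yates_shuffle(arr):
    """In-place Fisher-Yates (Knuth) shuffle: every permutation of arr
    is produced with equal probability."""
    for i in range(len(arr) - 1, 0, -1):
        j = random.randint(0, i)        # uniform over 0..i inclusive
        arr[i], arr[j] = arr[j], arr[i]  # swap chosen element into place
    return arr
```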