I am implementing a weighted lottery in Groovy. It allows some participants to have a better chance of winning than others (mainly as an NBA draft lottery project). It works by putting each participant into an array N times, where N is that participant's number of chances to win, then selecting a random index into that array.
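For illustration, here is a minimal sketch of that scheme; the names and the chance counts are my own, not from the original code:

```groovy
// Each participant lands in the wheel once per chance held.
def participants = [
    [name: 'bob', chances: 1],
    [name: 'joe', chances: 3]
]
def wheel = []
participants.each { p ->
    p.chances.times { wheel << p }
}
// Every slot is equally likely, so more slots means better odds.
def winner = wheel[new Random().nextInt(wheel.size())]
println "winner: ${winner.name}"
```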
Being a good little coder, I wrote a test. It selects a winner from the group 100 times and reports how many times each participant was picked. The expectation is that the counts should roughly track each participant's number of chances. The results were... off.
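A test along those lines might look like the following; this is a sketch under my own assumptions about the harness, not the author's actual test:

```groovy
// Draw a winner 100 times and tally picks per name.
def rand = new Random()
def wheel = ['bob', 'joe', 'joe', 'joe']   // joe holds 3 of the 4 chances
def tally = [:].withDefault { 0 }
100.times {
    tally[wheel[rand.nextInt(wheel.size())]]++
}
// Expect bob near 25 and joe near 75 over 100 draws.
tally.each { name, count -> println "$name picked $count times" }
```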
I narrowed the problem down to one line which, if split into two separate statements, works fine. Below is an abridged version of the routine; the "bad" version is active and the "good" version is commented out.
```groovy
def randomInRange(int min, int max) {
    Random rand = new Random()
    rand.nextInt((max - min) + 1) + min
}

def bob = [name: 'bob', timesPicked: 0]
def joe = [name: 'joe', timesPicked: 0]
def don = [name: 'don', timesPicked: 0]

def chanceWheel = []
```
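The rest of the routine was abridged out, so for context, this is only my guess at its general shape (the chance counts and the draw line are hypothetical, not the original code):

```groovy
// Hypothetical continuation: fill the wheel, then draw by random index.
[(bob): 5, (joe): 3, (don): 1].each { participant, chances ->
    chances.times { chanceWheel << participant }
}
// The disputed one-liner presumably lives around here.
def winner = chanceWheel[randomInRange(0, chanceWheel.size() - 1)]
winner.timesPicked++
```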
My question is: what is wrong with the one-line version? I suppose order of execution is the problem, but I cannot for the life of me figure out where it goes off the rails.
groovy
Lance staples