I'm not a math or CS person; I just mess around with Python (usually writing scripts for modeling / theorycrafting video games), and I noticed how slow the integer functions in the random module can be. It made me wonder why random.randint and random.randrange are implemented the way they are. I wrote a function that produces (for all intents and purposes) the same results as random.randint:
import random

# offset added to the span so that `stop` itself can be returned
big_bleeping_float = (2**64 - 2) / (2**64 - 2)

def fastrandint(start, stop):
    return start + int(random.random() * (stop - start + big_bleeping_float))
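As a quick sanity check (a rough sketch of what I ran; the sample count is arbitrary), both functions appear to cover the same inclusive range:

from collections import Counter

# Sample both functions; every value in 0-65 should show up for each.
# Uses fastrandint as defined above.
fast_counts = Counter(fastrandint(0, 65) for _ in range(100_000))
std_counts = Counter(random.randint(0, 65) for _ in range(100_000))

assert set(fast_counts) == set(range(66))
assert set(std_counts) == set(range(66))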
Using this to generate an integer in the inclusive range 0-65 gives a massive ~180% speed increase over random.randrange(0, 66), the next fastest method:
>>> timeit.timeit('random.randint(0, 66)', setup='from numpy import random', number=10000)
0.03165552873121058
>>> timeit.timeit('random.randint(0, 65)', setup='import random', number=10000)
0.022374771118336412
>>> timeit.timeit('random.randrange(0, 66)', setup='import random', number=10000)
0.01937231027605435
>>> timeit.timeit('fastrandint(0, 65)', setup='import random; from fasterthanrandomrandom import fastrandint', number=10000)
0.0067909916844523755
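For completeness, the same comparison can be bundled into a standalone script. This is only a sketch of that setup (using timeit.repeat and taking the minimum is my choice here, not how the numbers above were produced):

import random
import timeit
from fasterthanrandomrandom import fastrandint  # module name as in the timing above

# Compare the stdlib methods and fastrandint with the same iteration count
# as the interactive runs; min() over several repeats reduces timing noise.
candidates = {
    'random.randint(0, 65)':   lambda: random.randint(0, 65),
    'random.randrange(0, 66)': lambda: random.randrange(0, 66),
    'fastrandint(0, 65)':      lambda: fastrandint(0, 65),
}

for label, fn in candidates.items():
    best = min(timeit.repeat(fn, number=10000, repeat=5))
    print(f'{label:25} {best:.6f}')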
It is also roughly 75% faster than random.choice over the same range of values. And if, instead of calling fastrandint as a function, you write out the expression it evaluates inline, it is faster still:
>>> timeit.timeit('int(random.random() * (65 + big_bleeping_float))', setup='import random; big_bleeping_float= (2**64 - 2)/(2**64 - 2)', number=10000)
0.0037642723021917845
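The random.choice figure was measured the same way; something along these lines is the shape of that comparison (timing choice against a pre-built list of the values 0-65 is an assumption about the setup):

import timeit

# Sketch of the random.choice comparison over the inclusive range 0-65.
choice_time = timeit.timeit('random.choice(values)',
                            setup='import random; values = list(range(66))',
                            number=10000)
print(choice_time)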
So my question is: given that this approach is so much faster, why are random.randint and random.randrange implemented the way they are? Is there a drawback to my method that I'm missing?