Knuth is very good on the subject of randomness.
We are not very good at randomness. How can something be predictably random? Still, pseudo-random sequences can appear completely random to statistical tests.
There are three categories of random number generators, touched on in the comments above.
First, there are pseudo-random number generators, where if you know the current random number it is easy to compute the next one. That makes it easy to predict the rest of the sequence once you have seen a few values.
Then there are cryptographic algorithms that make this a lot harder. I believe they still produce pseudo-random sequences (contrary to what the comment above implies), but with the very important property that knowing several numbers of the sequence does NOT make it obvious how to compute the rest. The way they work is that crypto routines tend to hash the number, so that if one input bit changes, every output bit is equally likely to change as a result.
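As a rough sketch of that property (sometimes called avalanche), here is a splitmix64-style mixing step applied to two inputs that differ in a single bit. This is only an illustration, not real cryptography and not taken from any particular library:

    #include <cstdio>
    #include <cstdint>
    #include <bitset>

    // Illustration only: a splitmix64-style finalizer, not a real cryptographic hash.
    static uint64_t mix64(uint64_t x) {
        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
        x ^= x >> 27; x *= 0x94d049bb133111ebULL;
        x ^= x >> 31;
        return x;
    }

    int main() {
        uint64_t a = mix64(12345);
        uint64_t b = mix64(12345 ^ 1);  // same input with one bit flipped
        // Roughly half of the 64 output bits end up different.
        std::printf("bits changed: %zu of 64\n", std::bitset<64>(a ^ b).count());
    }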
Consider a simple modular generator (similar to some implementations of C's rand()):
    int rand() { return seed = seed * m + a; }
If m = 0 and a = 0, this is a lousy generator with a period of 1: 0, 0, 0, 0, .... If m = 1 and a = 1, it is not very random looking either: 0, 1, 2, 3, 4, 5, 6, ...
But if you choose m and a to be primes around 2^16, the output will look beautifully random if you inspect it casually. Since both numbers are odd, though, you will notice that the low bit simply toggles, i.e. the values alternate between odd and even. Not a great random number generator. And since there are only 2^32 values in a 32-bit number, by definition you repeat the sequence after at most 2^32 iterations, making it obvious that the generator is NOT random.
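A minimal, compilable sketch of the generator above, with made-up constants (both odd primes near 2^16), showing the alternating low bit:

    #include <cstdio>

    static unsigned int seed = 1;
    static const unsigned int m = 65537;  // illustrative primes near 2^16, both odd
    static const unsigned int a = 65539;

    static unsigned int lcg() { return seed = seed * m + a; }

    int main() {
        for (int i = 0; i < 8; ++i) {
            unsigned int r = lcg();
            std::printf("%10u  low bit = %u\n", r, r & 1u);  // low bit prints 0, 1, 0, 1, ...
        }
    }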
If you think of the middle bits as nicely scrambled while the low bits are not so random, then you can build a better random number generator out of several of these, XORing their bits shifted by different amounts so that all of the bits are well covered. Something like:
    (rand1() >> 8) ^ rand2() ^ (rand3() >> 5) ...
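A sketch of that combination, assuming the shifts above are ordinary bit shifts and using three small modular generators with made-up constants:

    #include <cstdio>

    static unsigned int s1 = 1, s2 = 2, s3 = 3;

    // Three simple modular generators; the constants are only for illustration.
    static unsigned int rand1() { return s1 = s1 * 65537u + 65539u; }
    static unsigned int rand2() { return s2 = s2 * 40503u + 30013u; }
    static unsigned int rand3() { return s3 = s3 * 49157u + 24593u; }

    // XOR them together, shifted by different amounts, so the better-scrambled
    // middle bits of each generator end up covering the whole word.
    static unsigned int combined() {
        return (rand1() >> 8) ^ rand2() ^ (rand3() >> 5);
    }

    int main() {
        for (int i = 0; i < 4; ++i)
            std::printf("%08x\n", combined());
    }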
However, each generator still steps in lockstep, which keeps the combination predictable. And any two consecutive values are correlated, so if you plot them you get lines on the screen. Now imagine you add rules for combining the generators, so that consecutive values are not formed the same way, for example
    v1 = (rand1() >> 8) ^ rand2() ...   v2 = (rand2() >> 8) ^ rand5() ...
and imagine that the seeds do not always advance in step. Now you are starting to get something that is much harder to reverse-engineer and predict, and the sequence is longer before it repeats.
For example, if you call rand1() every time but only advance rand2()'s seed every third call, a generator that combines them will go far longer than the period of either one alone before it repeats.
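A sketch of that idea (again with made-up constants): rand1() steps on every call, but the second generator's seed is only advanced every third call, so the combined stream takes much longer to cycle:

    #include <cstdio>

    static unsigned int s1 = 1, s2 = 2;

    static unsigned int rand1() { return s1 = s1 * 65537u + 65539u; }
    static unsigned int rand2() { return s2 = s2 * 40503u + 30013u; }

    static unsigned int combined() {
        static int calls = 0;
        unsigned int v2 = s2;                    // reuse rand2()'s current value...
        if (++calls % 3 == 0) v2 = rand2();      // ...and only advance its seed every third call
        return (rand1() >> 8) ^ v2;
    }

    int main() {
        for (int i = 0; i < 6; ++i)
            std::printf("%08x\n", combined());
    }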
Now imagine that you pump your (rather predictable) modular random number generator through DES or some other encryption algorithm. That will scramble the bits thoroughly.
Obviously there are better algorithms, but this gives you the idea. Numerical Recipes has many algorithms implemented in code and explained. One very good trick: generate not one value but a whole table of random values, then use an independent random number generator to pick which entry to return, generate a new value, and put it in that slot. This breaks up any correlation between adjacent pairs of numbers.
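A sketch of that table trick, assuming two independent modular generators with made-up constants (one fills the table, the other decides which entry to hand out):

    #include <cstdio>

    static unsigned int s1 = 1, s2 = 2;
    static unsigned int randA() { return s1 = s1 * 65537u + 65539u; }  // fills the table
    static unsigned int randB() { return s2 = s2 * 40503u + 30013u; }  // picks the slot

    static unsigned int table[64];
    static bool filled = false;

    static unsigned int shuffled_rand() {
        if (!filled) {                           // fill the whole table up front
            for (int i = 0; i < 64; ++i) table[i] = randA();
            filled = true;
        }
        int slot = randB() % 64;                 // independent generator chooses a slot
        unsigned int out = table[slot];
        table[slot] = randA();                   // replace the value just handed out
        return out;
    }

    int main() {
        for (int i = 0; i < 4; ++i)
            std::printf("%08x\n", shuffled_rand());
    }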
The third category is actual hardware random number generators, for example based on atmospheric noise:
http://www.random.org/randomness/
This, as far as current science can tell, is truly random. Perhaps someday we will discover that it obeys some underlying rule, but for now we cannot predict these values, and they are "truly" random as far as we know.
The Boost library has good C++ implementations of Fibonacci generators, the reigning kings of pseudo-random sequences, if you want to see some source code.
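If I remember the Boost.Random interface correctly (worth checking against the current docs), using one of its lagged Fibonacci generators looks roughly like this:

    #include <boost/random/lagged_fibonacci.hpp>
    #include <cstdio>

    int main() {
        boost::lagged_fibonacci607 gen(12345u);   // seeded lagged Fibonacci generator
        for (int i = 0; i < 4; ++i)
            std::printf("%f\n", gen());           // produces doubles in [0, 1)
    }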