Yes, I think it will be biased. uuencode needs 3 input bytes for every 4 output characters. Since you give it 8 bytes, the last group is padded with something non-random, which will skew the 12th character (and affect the 11th).
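To see the restriction concretely, here is a rough sketch (not from the question; it uses the coreutils base64 command, which produces the same encoding as uuencode -m, and /dev/urandom so the loop doesn't block):

# 8 bytes -> 12 base64 chars; tally which symbols appear at position 11
for i in $(seq 1 2000); do head -c 8 /dev/urandom | base64; done | cut -c11 | sort | uniq -c

Only 16 distinct symbols should show up there (the ones whose 6-bit value ends in two zero bits), each around 125 times, and position 12 is always "=".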
Can you try
head -c 9 /dev/random | uuencode -m -
(with 9 instead of 8) and post the results? That should not have the same problem.
ps: you also won't need to discard the trailing "=" padding, since 9 is a multiple of 3.
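A sketch of how you could check the 9-byte version (same substitutions as above, base64 and /dev/urandom): with 9 input bytes the 12th character carries 6 fully random bits, so all 64 symbols should appear with roughly equal counts.

# 9 bytes -> 12 base64 chars, no "=" padding; tally symbols at position 12
for i in $(seq 1 2000); do head -c 9 /dev/urandom | base64; done | cut -c12 | sort | uniq -c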
http://en.wikipedia.org/wiki/Uuencoding
pps: your result is, of course, statistically significant. You'd expect natural variation of about sqrt(mean), which is (guessing) sqrt(2000), or roughly 40. So three of those deviations, +/- 120, i.e. 1880-2120, should contain about 99% of the counts - you are seeing something much more systematic.
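Just to check that arithmetic with bc:

# one sigma ~ sqrt(2000), three sigma ~ 3*sqrt(2000)
echo "sqrt(2000)" | bc -l      # ~ 44.7
echo "3 * sqrt(2000)" | bc -l  # ~ 134

so strictly the band is more like 1866-2134, but that doesn't change the conclusion.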
ppps: it's a neat idea.
oops: I just realized that -m makes uuencode use base64 rather than the uuencode algorithm, but the same idea applies.
andrew cooke