std::uniform_real_distribution.
There's a really nice talk by STL from this year's Going Native conference that explains why you should use the standard distributions whenever possible. In short, hand-rolled code tends to be of laughably poor quality (think std::rand() % 100), or to have more subtle uniformity flaws, such as (std::rand() * 1.0 / RAND_MAX) * 99, which is the example given in the talk and a special case of the code posted in the question.
EDIT: I took a look at libstdc++'s implementation of std::uniform_real_distribution, and here is what I found:
The implementation produces a number in the range [dist_min, dist_max) by applying a simple linear transformation to a source number generated in the range [0, 1). It generates this source number using std::generate_canonical, whose implementation can be found here (at the end of the file). std::generate_canonical determines the number of times (denoted k) that the range of the distribution, expressed as an integer and denoted here as r*, will fit into the mantissa of the target type. What it then does is essentially generate one number in [0, r) for each r-sized segment of the mantissa and, using arithmetic, populate each segment accordingly. The formula for the resulting value can be expressed as
Σ(i=0, k-1, X_i / r^(i+1))

where the X_i are independent random draws uniform on [0, r). Each division by the range is equivalent to a shift by the number of bits used to represent it (i.e. log2(r)), and so fills the corresponding mantissa segment. This way, the full precision of the target type is used, and since the result lies in [0, 1), the exponent stays 0** (modulo bias) and you avoid the uniformity problems you get once you start tinkering with the exponent.
I would not trust this method to be cryptographically secure (and I have suspicions about possible off-by-one errors in the computation of the size of r), but I imagine it is substantially more reliable in terms of uniformity than the Boost implementation you posted, and definitely better than fiddling with std::rand.
It may be worth noting that the Boost code is in fact a degenerate case of this algorithm where k = 1, meaning it is equivalent only if the input range requires at least 23 bits to represent its size (IEEE 754 single precision) or at least 52 bits (double precision). This means a minimum range of ~8.4 million or ~4.5e15, respectively. In light of this information, I don't think that, if you are using a binary generator, the Boost implementation is quite going to cut it.
After a brief look at libc++'s implementation, it appears they are using what is basically the same algorithm, implemented slightly differently.
(*) r is actually the range of the input plus one. This allows the urng's max value to be used as valid input.
(**) Strictly speaking, the encoded exponent is not 0, as IEEE 754 encodes an implicit leading 1 before the radix point. Conceptually, however, this is irrelevant to this algorithm.