Random.NextDouble() (a double from the range [0.0, 1.0)) is sometimes multiplied by a large Int64 (say, long big = 9000000000L) and the result cast back to Int64, to obtain a random Int64 value larger than what Random.Next() can produce (an Int32 from the range [0, Int32.MaxValue)):
    Random r = new Random();
    long big = 9000000000L;
    long answer = (long)(r.NextDouble() * big);
It seems to me that the total number of unique double values in the range [0.0, 1.0) provides an upper bound on the number of unique Int64 values this can generate. A loose upper bound, in fact, since many distinct doubles will map to the same Int64 (for example, with big = 10, every double in [0.5, 0.6) maps to 5).
Therefore, I would like to know: what is the total number of unique double values in the range [0.0, 1.0)?
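(For reference, here is a minimal sketch of how one might count them, assuming IEEE 754 binary64 and counting +0.0 and every subnormal: non-negative finite doubles sort in the same order as their raw bit patterns, so the number of doubles below 1.0 is just the bit pattern of 1.0 read as an integer.)

    // Sketch (assumes IEEE 754 binary64; counts +0.0 and all subnormals, ignores -0.0):
    // non-negative doubles sort in the same order as their bit patterns, so the
    // count of doubles in [0.0, 1.0) equals the bit pattern of 1.0 itself.
    long countBelowOne = BitConverter.DoubleToInt64Bits(1.0);
    Console.WriteLine(countBelowOne);                 // 4607182418800017408
    Console.WriteLine(countBelowOne == 1023L << 52);  // True: 1023 * 2^52, about 2^62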
Even better would be to know the largest value that "big" can take such that "answer" can be any value in the range [0, big), and whether the distribution of "answer" values is uniform, assuming that Random.NextDouble() is uniform.
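To make the uniformity concern concrete, here is a toy analogue under a common idealization (an assumption, not a claim about any particular implementation): suppose NextDouble() picked uniformly among N equally spaced values k/N. Then (long)(x * big) is uniform only when big divides N; otherwise some outputs receive one more input than others, as this miniature N = 8, big = 3 case shows:

    // Toy pigeonhole demo (assumption: an idealized NextDouble() choosing
    // uniformly among N equally spaced values k/N; real generators differ).
    const int N = 8;     // stand-in for the much larger grid of a real generator
    const long big = 3;  // 3 does not divide 8, so the buckets cannot be equal
    var counts = new long[big];
    for (int k = 0; k < N; k++)
    {
        double x = (double)k / N;   // x in [0.0, 1.0)
        counts[(long)(x * big)]++;  // same multiply-and-cast as above
    }
    Console.WriteLine(string.Join(", ", counts)); // 3, 3, 2 -- not uniform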
Edit: Double (double) here refers to an IEEE 754 double-precision floating-point number, while Int64 (long) and Int32 (int) refer to 64-bit and 32-bit signed two's-complement integers, respectively.
Inspired by this question: Generating a unique 10-digit random number in Java
While I used C#, this question is language-agnostic and is more about discrete mathematics than programming. It bothers me not only out of mathematical curiosity, but also as a programmer who wants to use a formula only if it does what it is supposed to do, and from a security standpoint.
blizpasta