Pros and Cons of RNGCryptoServiceProvider

What are the pros and cons of using System.Security.Cryptography.RNGCryptoServiceProvider versus System.Random? I know that RNGCryptoServiceProvider is "more random," that is, less predictable to an attacker. Are there any other pros or cons?




UPDATE:

According to the answers, here are the pros and cons of using RNGCryptoServiceProvider at the moment:

Pros

  • RNGCryptoServiceProvider produces cryptographically stronger random numbers, which makes it better suited for generating encryption keys and the like.

Cons

  • System.Random is faster because it performs a simpler calculation; it should be used in simulations or long-running calculations where cryptographic randomness is not important. Note: see Kevin's answer for details about simulations; System.Random is not necessarily random enough even there, and you may need a different non-cryptographic PRNG. A short usage sketch of both classes follows this list.
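
To make the trade-off concrete, here is a minimal sketch (my illustration, not from the question) showing the two APIs side by side:

    using System;
    using System.Security.Cryptography;

    class RandomComparison
    {
        static void Main()
        {
            // System.Random: fast and seedable; fine when security is not a concern.
            var fast = new Random();
            int roll = fast.Next(1, 7); // pseudo-random die roll in 1..6
            Console.WriteLine(roll);

            // RNGCryptoServiceProvider: slower, but suitable for keys, tokens, etc.
            using (var rng = new RNGCryptoServiceProvider())
            {
                byte[] buffer = new byte[16];
                rng.GetBytes(buffer); // fills the buffer with crypto-strength bytes
                Console.WriteLine(BitConverter.ToString(buffer));
            }
        }
    }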
+67
c# random
Jan 07 '09 at 1:04
4 answers

A cryptographically strong RNG will be slower (it requires more computation) and will be spectrally white, but it is not as well suited to simulation or Monte Carlo methods, both because it takes longer and because it may not be repeatable, which is desirable for testing.

In general, you want to use a cryptographic PRNG when you need a unique number, such as a UUID, or a key for encryption, and a deterministic PRNG for speed and simulation.
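
To illustrate the repeatability point: a seeded System.Random reproduces the same sequence on every run, which a crypto RNG deliberately cannot do. A small sketch (my addition, not part of the answer):

    using System;

    class Repeatability
    {
        static void Main()
        {
            // Two Random instances with the same seed yield identical sequences,
            // which makes simulation and test runs reproducible.
            var a = new Random(42);
            var b = new Random(42);
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine($"{a.Next(100)} == {b.Next(100)}");
            }
            // RNGCryptoServiceProvider exposes no seed at all, so its output
            // cannot be reproduced between runs, by design.
        }
    }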

+51
Jan 07 '09 at 1:11

System.Random is not thread safe.
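
A common workaround is to give each thread its own instance via ThreadLocal<Random>. Here is a sketch (the helper class and its name are mine) that seeds each instance from a shared counter, so threads created in the same tick do not start with identical time-based seeds:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class ThreadSafeRandom
    {
        private static int _seed = Environment.TickCount;

        // One Random per thread, each with a distinct seed.
        private static readonly ThreadLocal<Random> Local =
            new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref _seed)));

        public static int Next(int maxValue) => Local.Value.Next(maxValue);
    }

    class Demo
    {
        static void Main()
        {
            Parallel.For(0, 4, _ => Console.WriteLine(ThreadSafeRandom.Next(1000)));
        }
    }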

+12
Feb 02 '18

Yes, there is one more. As Charlie Martin wrote, System.Random is faster.

I would like to add the following information:

RNGCryptoServiceProvider is the standard implementation of a random number generator that complies with security standards. If you need random numbers for security purposes, you should use this class or an equivalent, but not System.Random, because it is highly predictable.

For all other purposes, the higher performance of System.Random and equivalent classes is welcome.
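
For instance, drawing a security-sensitive integer from a range takes a little care, because a naive modulo introduces bias. A sketch using rejection sampling (the helper class name is mine):

    using System;
    using System.Security.Cryptography;

    static class SecureRandom
    {
        // Returns a uniformly distributed int in [0, maxExclusive),
        // using rejection sampling to avoid modulo bias.
        public static int Next(int maxExclusive)
        {
            if (maxExclusive <= 0)
                throw new ArgumentOutOfRangeException(nameof(maxExclusive));

            using (var rng = new RNGCryptoServiceProvider())
            {
                var bytes = new byte[4];
                // Largest multiple of maxExclusive not exceeding int.MaxValue;
                // values at or above it are rejected to keep the result uniform.
                int limit = int.MaxValue - (int.MaxValue % maxExclusive);
                while (true)
                {
                    rng.GetBytes(bytes);
                    int value = BitConverter.ToInt32(bytes, 0) & int.MaxValue; // drop sign bit
                    if (value < limit)
                        return value % maxExclusive;
                }
            }
        }
    }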

+9
Jan 07 '09 at 1:29

In addition to the previous answers:

System.Random should NEVER be used in simulations or numerical solvers for science and engineering, where inaccurate simulation results or failure to converge would have significant negative consequences. This is because Microsoft's implementation is deeply flawed in several respects, and they cannot (or will not) easily fix it because of compatibility concerns. See this post.

So:

  • If there is an attacker who must not be able to know the generated sequence, use RNGCryptoServiceProvider or another carefully designed, implemented, and tested cryptographic RNG, and use hardware randomness where possible. Otherwise;

  • If it is an application, such as a simulation, that requires good statistical properties, use a carefully designed and implemented non-cryptographic PRNG such as the Mersenne Twister (a sketch follows this list). (A crypto RNG would also be correct in these cases, but is often too slow and cumbersome.) Otherwise;

  • ONLY if the use of the numbers is completely trivial, such as deciding which image to show next in a randomized slide show, use System.Random.
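
As an example of the middle option, here is a minimal sketch assuming the third-party MathNet.Numerics package, whose MersenneTwister derives from System.Random and therefore drops into code that already accepts a Random:

    using System;
    using MathNet.Numerics.Random; // third-party package: MathNet.Numerics

    class SimulationRng
    {
        static void Main()
        {
            // MersenneTwister extends System.Random, so existing simulation
            // code can accept it without other changes.
            Random rng = new MersenneTwister(42);

            double sum = 0;
            for (int i = 0; i < 1000000; i++)
            {
                sum += rng.NextDouble();
            }
            Console.WriteLine(sum / 1000000); // should be close to 0.5
        }
    }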




I recently ran into this problem very tangibly while working on a Monte Carlo simulation designed to test the effect of different usage patterns for medical devices. The simulation produced results that drifted gently in the opposite direction of what was expected.

Sometimes when you cannot explain something, there is a reason behind it, and that reason can be very troublesome!

Here is a graph of the p values ​​that were obtained over the growing number of simulation lots:

[Chart: input and output p-values while using System.Random]

Red and magenta plots show the statistical significance of the differences between the two usage patterns in the two studied output metrics.

The blue trace is a particularly shocking result, because it represents p-values for a characteristic of the random input to the simulation. (This was plotted only to confirm that the input was not at fault.) The input was, of course, the same for the two usage models under study, so there should have been no statistically significant difference between the inputs to the two models. Nevertheless, here I was seeing better than 99.97% confidence that there was such a difference!

At first I thought something was wrong in my code, but everything checked out. (In particular, I confirmed that threads were not sharing instances of System.Random.) When repeated test runs showed this unexpected result to be highly consistent, I began to suspect System.Random.

I replaced System.Random with a Mersenne Twister implementation, with no other changes, and the result immediately became dramatically different, as shown here:

[Chart: input and output p-values after switching to a better PRNG]

This chart reflects the absence of any statistically significant difference between the two usage models for the parameters used in this particular test batch. This was the expected result.

Note that in the first chart the vertical logarithmic scale (on p-value) covers seven decades, while in the second it covers just one decade, demonstrating how pronounced the statistical significance of the spurious discrepancies was! (The vertical position indicates the likelihood that the discrepancies could have arisen by chance.)

I suspect what happened is that System.Random has correlations over some fairly short generator cycle, and the different patterns of internal randomness sampling in the two tested models (which made significantly different numbers of calls to Random.Next) affected the two models in different ways.

It so happened that the simulation input data were drawn from the same RNG streams that the models used for internal decisions, and this evidently caused the sampling discrepancies to feed into the input data. (This was actually lucky, because otherwise I might not have realized that the unexpected result was a software fault and not some real property of the simulated devices!)

+2
Feb 24 '19 at 1:30


