In addition to the previous answers:
System.Random should NEVER be used in scientific or engineering simulations or numerical computations, where inaccurate results or convergence failures have significant negative consequences. The Microsoft implementation is seriously flawed in several respects, and they cannot (or will not) easily fix it because of compatibility concerns. See this post.
So:
If there is an attacker who must not be able to predict the generated sequence, use RNGCryptoServiceProvider or another carefully designed, implemented, and tested cryptographic RNG, and, if possible, use hardware randomness (a minimal usage sketch follows this list). Otherwise;
If it is an application, such as a simulation, that requires good statistical properties, use a carefully designed and implemented non-cryptographic PRNG, such as the Mersenne Twister. (A cryptographic RNG would also be correct in these cases, but is often too slow and cumbersome.) Otherwise;
ONLY if the use of the numbers is completely trivial, for example deciding which image to show next in a randomized slide show, use System.Random.
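For the cryptographic case above, here is a minimal sketch of one way to draw unpredictable values (assuming classic .NET Framework; on newer .NET, the RandomNumberGenerator class plays the same role):

```csharp
using System;
using System.Security.Cryptography;

class CryptoRandomDemo
{
    static void Main()
    {
        // RNGCryptoServiceProvider produces cryptographically strong bytes.
        using (var rng = new RNGCryptoServiceProvider())
        {
            byte[] bytes = new byte[8];
            rng.GetBytes(bytes);                       // fill the buffer with random bytes
            ulong value = BitConverter.ToUInt64(bytes, 0);
            Console.WriteLine(value);
        }
    }
}
```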
I recently ran into this problem very tangibly while working on a Monte Carlo simulation designed to test the effect of various usage patterns for medical devices. The simulation produced results that drifted gently in the opposite direction of what was expected.
Sometimes, when you cannot explain something, there is a reason behind it, and that reason can be very onerous!
Here is a graph of the p-values obtained as the number of simulation batches grew:

The red and magenta traces show the statistical significance of differences between the two usage patterns in the two output metrics under study.
The blue trace is a particularly shocking result because it represents p-values for a statistic characterizing the random input to the simulation. (It was included only to confirm that the input was not faulty.) The input was, of course, the same for the two usage patterns under investigation, so there should have been no statistically significant difference between the inputs of the two models. Yet here I was seeing better than 99.97% confidence that such a difference existed!
At first I assumed something was wrong in my own code, but everything checked out. (In particular, I confirmed that threads were not sharing System.Random instances; a sketch of that pattern follows.) When repeated testing showed that this unexpected result was highly consistent, I began to suspect System.Random.
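For reference, a minimal sketch of the per-thread pattern I verified, assuming one independently seeded instance per thread (the exact seeding scheme in my code differed):

```csharp
using System;
using System.Threading;

static class ThreadSafeRandom
{
    // Distinct seed per instance so threads don't get identical streams.
    private static int _seedCounter = Environment.TickCount;

    // System.Random is not thread-safe: sharing one instance across
    // threads can silently corrupt its internal state.
    private static readonly ThreadLocal<Random> _local =
        new ThreadLocal<Random>(() =>
            new Random(Interlocked.Increment(ref _seedCounter)));

    public static Random Instance => _local.Value;
}

// Usage from any worker thread:
// double x = ThreadSafeRandom.Instance.NextDouble();
```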
I replaced System.Random with an implementation of the Mersenne Twister, with no other changes, and the results immediately became dramatically different, as shown here:

This chart reflects the absence of any statistically significant difference between the two usage patterns for the parameters used in this particular test batch. This was the expected result.
Note that the vertical logarithmic scale (in p-value) of the first graph covers seven decades, while the second covers only one, demonstrating just how pronounced the statistical significance of the spurious discrepancies was! (The vertical position indicates the probability that the discrepancies could have arisen by chance.)
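For anyone wanting to try the same swap: the replacement only needs to mirror the System.Random calls you actually use. Below is a minimal sketch of the MT19937 reference algorithm (Matsumoto and Nishimura), not my production class; prefer a vetted library implementation where possible.

```csharp
// Minimal MT19937 (Mersenne Twister) sketch, for illustration only.
public sealed class MersenneTwister
{
    private const int N = 624, M = 397;
    private const uint MatrixA = 0x9908b0dfu;
    private const uint UpperMask = 0x80000000u, LowerMask = 0x7fffffffu;

    private readonly uint[] _mt = new uint[N];
    private int _mti;

    public MersenneTwister(uint seed)
    {
        _mt[0] = seed;
        for (_mti = 1; _mti < N; _mti++)
            _mt[_mti] = 1812433253u * (_mt[_mti - 1] ^ (_mt[_mti - 1] >> 30)) + (uint)_mti;
    }

    private uint NextUInt32()
    {
        if (_mti >= N)
        {
            // Regenerate all N words of state at once.
            for (int k = 0; k < N; k++)
            {
                uint y = (_mt[k] & UpperMask) | (_mt[(k + 1) % N] & LowerMask);
                _mt[k] = _mt[(k + M) % N] ^ (y >> 1) ^ ((y & 1u) * MatrixA);
            }
            _mti = 0;
        }
        uint z = _mt[_mti++];
        // Tempering transform.
        z ^= z >> 11;
        z ^= (z << 7) & 0x9d2c5680u;
        z ^= (z << 15) & 0xefc60000u;
        z ^= z >> 18;
        return z;
    }

    // Same call shape as System.Random.NextDouble(): uniform in [0, 1).
    public double NextDouble() => NextUInt32() * (1.0 / 4294967296.0);

    // Same call shape as System.Random.Next(maxValue): [0, maxValue).
    // (Simple scaling; carries a tiny bias that did not matter here.)
    public int Next(int maxValue) => (int)(NextDouble() * maxValue);
}

// Usage: var mt = new MersenneTwister(42); double u = mt.NextDouble();
```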
I suspect that what happened is that System.Random has correlations across some fairly short generator cycle, and that the different patterns of internal randomness sampling in the two tested models (which made significantly different numbers of Random.Next calls) interacted with those correlations in different ways.
It happened that the modeled input data were drawn from the same RNG streams that the models used for their internal decisions, and this evidently caused the sampling discrepancies to bleed into the input data. (This was actually lucky, because otherwise I might never have realized that the unexpected result was a software fault rather than some real property of the simulated devices!)
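If you want to probe for this kind of effect yourself, one crude first check, a sketch rather than a substitute for a proper test battery such as TestU01 or PractRand, is the serial correlation of System.Random output at various lags:

```csharp
using System;

class SerialCorrelationProbe
{
    static void Main()
    {
        var rnd = new Random(12345);
        const int n = 1_000_000;
        var x = new double[n];
        for (int i = 0; i < n; i++) x[i] = rnd.NextDouble();

        double mean = 0;
        for (int i = 0; i < n; i++) mean += x[i];
        mean /= n;

        double den = 0;
        for (int i = 0; i < n; i++) den += (x[i] - mean) * (x[i] - mean);

        // Lags chosen for illustration; System.Random is documented as a
        // variant of Knuth's subtractive generator, so short structural
        // lags are plausible suspects.
        foreach (int lag in new[] { 1, 2, 21, 55 })
        {
            double num = 0;
            for (int i = 0; i < n - lag; i++)
                num += (x[i] - mean) * (x[i + lag] - mean);
            Console.WriteLine($"lag {lag,3}: r = {num / den:F6}");
        }
    }
}
```

A near-zero correlation here proves nothing on its own; the defects that bit me only surfaced under the specific sampling pattern of my simulation.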