How many Rabin-Miller iterations should be used for cryptographically safe primes?

I am generating a 2048-bit Diffie-Hellman-type safe prime p, such that p and (p-1)/2 are both prime.

How few Rabin-Miller iterations can be used for both p and (p-1)/2 while still being confident of a cryptographically strong key? In my research I have heard everything from 6 to 64 iterations for 1024-bit ordinary primes, so I am a little confused at this point. And once that is established, does the number change if you are generating a safe prime rather than an ordinary one?

Computation time is at a premium, so this is a practical question: I basically want to find out the lowest number of tests I can get away with while still maintaining practically guaranteed security.

+16
cryptography primes
Jun 13 '11
7 answers

Suppose you select a prime p by choosing random values until you hit one that Miller-Rabin declares prime, using at most n rounds of the Miller-Rabin test per candidate. (For a so-called "safe prime", things do not change, except that you run two nested tests.)

The probability that a random 1024-bit integer is prime is about 1/900. Now, you do not want to do anything stupid, so you generate only odd values (an even 1024-bit integer is guaranteed non-prime) and, more generally, you run the Miller-Rabin test only on values that are not "obviously" composite, i.e. not divisible by a small prime. So you end up trying about 300 values with Miller-Rabin before hitting a prime (on average). When a value is composite, Miller-Rabin will detect it with probability 3/4 in each round, so the number of Miller-Rabin rounds you run on average for a single composite value is 1 + 1/4 + 1/16 + ... = 4/3. For the 300 values, this means about 400 Miller-Rabin rounds in total, regardless of what you choose for n.

So if you choose n = 40, for example, the cost implied by n is less than 10% of the total computational cost. The random prime selection process is dominated by the tests on composites, which are not affected by the value of n you choose. I talked about 1024-bit integers here; for larger integers, the choice of n matters even less, since primes become sparser as the size grows (for 2048-bit integers, that "10%" becomes "5%").

Therefore you can choose n = 40 and be happy with it (or at least know that reducing n would not buy you much anyway). On the other hand, using n greater than 40 is pointless, because that would push the error probability below the risk of a simple miscomputation. Computers are hardware; they can have random failures. For example, a primality-test function could return "true" for a composite value because a cosmic ray (a high-energy particle hurtling through the Universe) hits just the right transistor at the right time, flipping the return value from 0 ("false") to 1 ("true"). This is very unlikely, but no less likely than a probability of 2^-80. See https://stackoverflow.com for details. The bottom line is that no matter how you make sure an integer is prime, there remains an unavoidable probabilistic element, and 40 Miller-Rabin rounds already give you the best you can hope for.

To summarize, use 40 rounds.
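To make that loop concrete, here is a minimal sketch in Python of the selection process described above. It is illustration only, not this answer's code: production code should draw candidates from a CSPRNG (e.g. Python's secrets module) and sieve with many more small primes.

    import random

    def miller_rabin(n, rounds=40):
        """Return True if n is probably prime (error <= 4**-rounds)."""
        if n < 2:
            return False
        # Trial division first: most random candidates die cheaply here,
        # which is why the choice of `rounds` barely affects total cost.
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        # Write n - 1 = d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # witness found: composite, stop immediately
        return True

    def random_safe_prime(bits=2048, rounds=40):
        # Draw random odd candidates q until q and p = 2q + 1 are both
        # probable primes (the "two nested tests" for a safe prime).
        while True:
            q = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
            p = 2 * q + 1
            if miller_rabin(q, rounds) and miller_rabin(p, rounds):
                return p

Note how a composite exits at its first failed round, which is why the 400-round average above holds no matter what you pick for n.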

+26
Jun 13 '11 at 12:01

The paper Average Case Error Estimates for the Strong Probable Prime Test by Damgård, Landrock, and Pomerance shows that if you randomly select a k-bit odd number n and apply t independent Rabin-Miller tests in succession, the probability that n is composite has much stronger bounds.

In fact, for 3 <= t <= k/9 and k >= 21,

p(k,t) < k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k))

For a k = 1024-bit prime, t = 6 iterations give an error probability of less than 10^(-40).
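As a quick sanity check of that figure, here is a small Python sketch that evaluates the bound exactly as stated above (the formula and its validity range come from the paper; the function name is mine):

    import math

    def dlp_bound(k, t):
        # Damgard-Landrock-Pomerance bound for the probability that a random
        # k-bit odd number passing t Rabin-Miller rounds is composite;
        # valid for 3 <= t <= k/9 and k >= 21.
        assert 3 <= t <= k / 9 and k >= 21
        return k**1.5 * 2**t * t**-0.5 * 4**(2 - math.sqrt(t * k))

    print(dlp_bound(1024, 6))  # roughly 1e-40, matching the claim above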

+9
Jan 30 '14 at 7:42

Each Rabin-Miller iteration reduces the probability that a composite number slips through by a factor of at least 4.

So after 64 iterations, there is only a 1 in 2^128 chance that the number is composite.

Assuming you use these primes for a public-key algorithm (e.g. RSA), and assuming you combine it with a symmetric algorithm using (say) 128-bit keys, an adversary can guess your symmetric key with that same probability.

The bottom line is to choose the number of iterations so that this probability is in the same ballpark as the other security parameters you have chosen for your system.

[update, clarify]

The answer depends entirely on what algorithms you are going to use the numbers for, and what the best-known attacks against those algorithms are.

For example, according to Wikipedia:

As of 2003, RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, and 3072-bit RSA keys to 128-bit symmetric keys.

So if you plan to use these primes to generate, say, a 1024-bit RSA key, there is no reason to run more than about 40 iterations of Rabin-Miller. Why? Because long before you ever hit a Miller-Rabin error, an attacker could have cracked one of your keys by other means.

Of course, there is no reason not to do more iterations if time permits. It just doesn't make much sense.

On the other hand, if you are generating 2048-bit RSA keys, then 56 (or so) Rabin-Miller iterations are more appropriate.
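Under the worst-case 4^-n bound, each round buys roughly two bits of assurance, so matching the symmetric strength of the rest of the system gives a simple rule of thumb (a Python sketch using the strength equivalences quoted above; the helper name is mine):

    import math

    def rounds_to_match(symmetric_bits):
        # Each Rabin-Miller round shrinks the worst-case error by a factor
        # of 4 (about 2 bits), so ceil(bits / 2) rounds push the error
        # probability below 2**-bits.
        return math.ceil(symmetric_bits / 2)

    print(rounds_to_match(80))   # 40 rounds, ~1024-bit RSA equivalent
    print(rounds_to_match(112))  # 56 rounds, ~2048-bit RSA equivalent
    print(rounds_to_match(128))  # 64 rounds, ~3072-bit RSA equivalent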

Cryptography is usually built as a composition of primitives such as prime generation, RSA, SHA-2, and AES. If you want to make one of these primitives vastly stronger than the others, you can, but it is a bit like putting a 10-foot vault door on a log cabin.

There is no fixed answer to your question. It depends on the strength of the other parts that are part of your cryptographic system.

That said, 2^-128 is a ridiculously tiny probability, so I would probably just use 64 iterations :-).

+4
Jun 13 '11 at 0:20

From the libgcrypt source, cipher/primegen.c, line 1295: /* We use 64 Rabin-Miller rounds which is better and thus sufficient. We do not have a Lucas test implementaion thus we can't do it in the X9.31 preferred way of running a few Rabin-Miller followed by one Lucas test. */

+2
Apr 23 '15 at 19:46

I would do two or three iterations of Miller-Rabin (i.e., strong Fermat probable-prime tests), making sure that one of the bases is 2.

Then I would run a strong Lucas probable-prime test, selecting D, P, and Q using the method described here: https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW_primality_test

There are no known composites that would pass this combination of Fermat and Lucas tests.

This is much faster than 40 Rabin-Miller iterations. Moreover, as Pomerance, Selfridge, and Wagstaff observed in https://math.dartmouth.edu/~carlp/PDF/paper25.pdf , repeated Fermat-type tests have diminishing returns: if N is a pseudoprime to one base, it is more likely than average to be a pseudoprime to other bases as well. That is why, for example, so many base-2 pseudoprimes (psp base 2) are also psp base 3.
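If you would rather not implement the strong Lucas test yourself: as far as I know, sympy's isprime implements exactly this kind of Baillie-PSW combination (strong base-2 Miller-Rabin plus a strong Lucas test), so a quick sketch of relying on it looks like this:

    from sympy import isprime  # Baillie-PSW-style test per sympy's docs

    print(isprime(2**89 - 1))   # True: a Mersenne prime
    print(isprime(3215031751))  # False: composite (151 * 751 * 28351), even
                                # though it is a strong pseudoprime to the
                                # bases 2, 3, 5, and 7 simultaneously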

+1
Apr 15 '19 at 0:46

A lower probability is usually better, but I would take the stated probability value with a grain of salt. Albrecht et al., in Prime and Prejudice: Primality Testing Under Adversarial Conditions, break a number of primality-testing routines in cryptographic libraries. In one example, the published failure probability is 1/2^80, but the number they construct is declared prime 1 time out of 16.

In several other examples, their composite passes 100% of the time.

0
Apr 20 '19 at 0:04

Does it matter? Why not run 1000 iterations? When searching for primes, you stop applying the Rabin-Miller test the first time a candidate fails, so the upper limit on the number of iterations has essentially no effect on the time it takes to find a prime. You can even run a deterministic primality-proving algorithm after those 1000 iterations to be completely sure.

Also, the probability that a composite number survives n iterations is at most 4^(-n).

-1
Jun 13 '11


