
In this post, I found that choosing the RSA modulus $N$ to be a product of safe primes avoids Williams's $p + 1$ factoring attack. Suppose $N = p \cdot q$, where $p$, $q$, $(p-1)/2$ and $(q-1)/2$ are primes. In this case, certainly $p-1$ and $q-1$ are not smooth; they each have a very large prime factor. But how can we guarantee that $p+1$ and $q+1$ are also not smooth? Why are they guaranteed to have large prime factors?


1 Answer


In this post, I found that choosing the RSA modulus $N$ to be a product of safe primes avoids Williams's $p + 1$ factoring attack. But how can we guarantee that $p+1$ and $q+1$ are also not smooth? Why are they guaranteed to have large prime factors?

Actually, you're tripping over the various notions of "safe primes" vs "strong primes".

"Safe primes" are generally defined as you said, primes $p$ where $(p-1)/2$ is also prime. As you note, it guarantees that $p-1$ has a large prime factor; however it gives no guarantee of $p+1$

"Strong primes" are more variously defined; one common one is a prime $p$ where $p-1$ and $p+1$ both have a known large (for example, > 100 bit) prime factor (and sometimes there are more conditions). With this definition, both $p-1$ and $p+1$ are not smooth.

Those arguing for more complex RSA key-generation methods (the reasoning apparently being that adding complexity always adds security) generally argue for "strong primes".

On the other hand, it's not at all clear that this adds any real security; if your primes $p$ and $q$ are small enough that a random prime has a nontrivial probability of being vulnerable to Williams's method, then with strong primes of the same size you have a nontrivial probability of being vulnerable to ECM (and so the prime sizes you picked were too small anyway).

Here's how Williams's method works: the attacker takes guesses at prime factors; if he guesses all the prime factors of $p+1$ or $q+1$, he can factor (and, other than time, there's no downside to incorrect guesses). (Pollard's $p-1$ method works the same way with $p-1$ and $q-1$.)
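To show that "guess the prime factors" structure in code, here is a minimal sketch of Pollard's $p-1$ (Williams's $p+1$ has the same shape but works in a different group, so the simpler method makes the point). The smoothness bound `B` and the base $2$ are illustrative assumptions.

```python
from math import gcd

def pollard_p_minus_1(N, B=10_000):
    # "Guess" every prime power up to B at once by exponentiating the base
    # by 2, 3, 4, ..., B; if p-1 is B-smooth for some prime p | N, then
    # a = 2^(B!) = 1 (mod p), and gcd(a - 1, N) reveals p.
    a = 2
    for k in range(2, B + 1):
        a = pow(a, k, N)
    d = gcd(a - 1, N)
    if 1 < d < N:
        return d       # the guesses covered all prime factors of p-1
    return None        # guesses failed; nothing lost but time
```

For instance, with $N = 1009 \cdot 227$ and `B = 100` the guesses cover $1008 = 2^4 \cdot 3^2 \cdot 7$ but not $226 = 2 \cdot 113$, so `pollard_p_minus_1(1009 * 227, B=100)` returns 1009.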

ECM works this way: the attacker picks a pseudocurve, which fixes "small" values $\epsilon_p$ and $\epsilon_q$; then, he takes guesses at prime factors, and if he guesses all the prime factors of $p + \epsilon_p$ or $q + \epsilon_q$, he can factor (and, again, other than time, there's no downside to incorrect guesses). [1]
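For comparison, here is a minimal sketch of ECM "stage 1", structured to highlight the parallel: picking a pseudocurve fixes the group order $p + \epsilon_p$, and the attacker then guesses its prime factors up to a smoothness bound. Real implementations use Montgomery-curve arithmetic and a separate stage 2; the bound `B` and the number of curves tried here are illustrative assumptions.

```python
import math, random

def ecm_stage1(N, B=1000, tries=50):
    for _ in range(tries):
        # Random pseudocurve y^2 = x^3 + a*x + b (mod N) through a random
        # point P; b is implied and never needed by the addition formulas.
        x, y, a = (random.randrange(N) for _ in range(3))
        P = (x, y)

        def add(P, Q):
            # Affine point addition mod N; a non-invertible denominator
            # exposes a nontrivial gcd, i.e. (usually) a factor of N.
            if P is None:
                return Q
            if Q is None:
                return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % N == 0:
                return None                      # point at infinity
            num = (y2 - y1) if P != Q else (3 * x1 * x1 + a)
            den = (x2 - x1) if P != Q else (2 * y1)
            g = math.gcd(den, N)
            if g > 1:
                raise ValueError(g)              # factor (or N itself) found
            lam = num * pow(den, -1, N) % N
            x3 = (lam * lam - x1 - x2) % N
            return (x3, (lam * (x1 - x3) - y1) % N)

        def mul(k, P):
            # Double-and-add scalar multiplication built on add().
            R = None
            while k:
                if k & 1:
                    R = add(R, P)
                P = add(P, P)
                k >>= 1
            return R

        try:
            Q = P
            for k in range(2, B + 1):            # guess: group order is B-smooth
                Q = mul(k, Q)
        except ValueError as e:
            g = e.args[0]
            if 1 < g < N:
                return g                         # nontrivial factor of N
    return None                                  # all curves failed; only time lost
```

The "failed inversion" is exactly the event that the guessed exponents kill the point modulo one prime factor of $N$ but not the other, which is the pseudocurve analogue of the gcd step in the $p-1$ and $p+1$ methods.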

Each iteration of ECM (that is, a guess at a prime factor) does take more time than the corresponding iteration of Williams's method, but only by a small constant factor; as we rarely have that precise an estimate of the attacker's capability, we can generally ignore this constant factor.

What does this mean? We can certainly select $p$ to be a strong prime, where $p+1$ has a large prime factor (which is essentially unguessable), and so Williams's factoring method has no hope. However, we cannot do the same for $p + \epsilon_p$, as that covers a range around $p$, and we cannot control which pseudocurve the attacker selects.

Hence, selecting strong primes doesn't, in practice, make the factoring any more difficult.


[1]: Real ECM implementations will select a number of pseudocurves, and run them in parallel. They do this to take advantage of the possibility that one of the curves might have a $p + \epsilon_p$ which is exceptionally smooth; however the above argument holds even if the attacker doesn't do that.

poncho