
It's my understanding that there is no longer a requirement that $p$ and $q$ be safe primes when choosing an RSA modulus. How is it that this does not change the hardness of factoring $N$?

boran

2 Answers


First, we may want RSA primes to be something like safe primes, i.e. primes $p$ for which $(p-1)/2$ is prime as well.

Back in 1974, Pollard found an algorithm that factors $N=pq$ if $p-1$ or $q-1$ is smooth, that is, if all prime factors of $p-1$ (or $q-1$) are smaller than a bound $B$. The algorithm then factors $N$ in time $\mathcal O(B\cdot \log B\cdot \log^2 N)$. This implies that you want at least one "large" prime factor in both $p-1$ and $q-1$, where "large" is probably something like $>2^{80}$, or a similar value matching your key strength.
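Pollard's $p-1$ method is short enough to sketch. Below is a minimal Python illustration (the function name and the toy modulus are my own, not from the original answer):

```python
import math

def pollard_p_minus_1(n, bound):
    """Pollard's p-1 method: finds a prime factor p of n
    whenever p-1 is bound-smooth (all prime factors <= bound)."""
    a = 2
    for j in range(2, bound + 1):
        # After this step a = 2^(2*3*...*j) mod n; once that exponent is a
        # multiple of p-1, Fermat gives a = 1 (mod p), so the gcd reveals p.
        a = pow(a, j, n)
        d = math.gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None  # no factor found: raise the bound or give up

# Toy example: 241 * 2027, where 241 - 1 = 2^4 * 3 * 5 is very smooth,
# while 2027 - 1 = 2 * 1013 has the large prime factor 1013.
print(pollard_p_minus_1(241 * 2027, bound=100))  # finds 241
```

Note how the large prime factor 1013 in $2027-1$ keeps that factor out of reach, exactly the property "strong" primes try to guarantee for both factors.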

The most "effective" solution is of course to pick $p,q$ as safe primes. In practice, however, it suffices if they are strong primes, because:

  1. you can find them much more quickly, because the constraints are less severe; and
  2. there are other attacks against which a strong prime will protect you, such as Williams' $p+1$ factoring method.
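To illustrate why safe primes are slower to find than random primes, here is a hypothetical sketch of safe-prime generation in Python (Miller-Rabin plus a rejection loop; all names are mine):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True

def generate_safe_prime(bits):
    """Search until both q and p = 2q + 1 are (probable) primes.
    Needing *two* simultaneous primality events is why safe primes
    cost much more to find than random primes of the same size."""
    while True:
        q = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
        p = 2 * q + 1
        if is_probable_prime(q) and is_probable_prime(p):
            return p
```

A strong-prime search relaxes the condition to "$p-1$ (and $p+1$) have *some* large prime factor", which is satisfied by far more candidates, hence point 1 above.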

Now the argument is that the elliptic-curve method (ECM) is a strict generalization / improvement of Pollard's $p-1$ method for finding small factors, as it allows one to "retry" if a "bad" group order (that is, $p-1$) was encountered. Here is the quote from Rivest's paper (see below):

> Thus we see that it is useless to protect against factoring attacks by building in large prime factors into $p-1$ or $p+1$, since the enemy cryptanalyst can instead attempt to find an elliptic curve $E_{a,b}$ such that $|E_{a,b}|$ is smooth. The cryptographer has no control over whether $|E_{a,b}|$ is smooth or not, since this is essentially a random number approximately equal to $p$. Only by making $p$ sufficiently large can he adequately protect against such an attack.

If you want to dive further into the history of strong primes, I suggest reading "Are 'Strong' Primes Needed for RSA?" by Rivest and Silverman (1999).

I have used material from Section 8.3 of said paper in the above answer. After the quoted passage, the authors go on to list concrete numbers for the probability of breaking random prime factors of 1024-bit moduli, and find the chances to be negligible for ECM. Since ECM is strictly better than $p-1$ factoring, random primes suffice.
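For completeness, the "retry with a fresh random curve" idea can be sketched in a few dozen lines. This is a minimal, inefficient illustration of Lenstra's ECM, not a faithful implementation of any library; all names and the toy modulus are my own:

```python
import math
import random

class _FactorFound(Exception):
    """Raised when a gcd computation stumbles on a factor of n."""
    def __init__(self, factor):
        self.factor = factor

def _point_add(P, Q, a, n):
    # Add two points on y^2 = x^3 + a*x + b over Z/nZ (b is never needed).
    # A non-invertible slope denominator leaks a factor of n via the gcd.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None  # point at infinity
    if P == Q:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    g = math.gcd(den, n)
    if g > 1:
        raise _FactorFound(g)
    s = num * pow(den, -1, n) % n
    x3 = (s * s - x1 - x2) % n
    return x3, (s * (x1 - x3) - y1) % n

def lenstra_ecm(n, bound=150, curves=100):
    """Try random curves; a curve succeeds when its group order modulo
    some prime factor of n happens to be bound-smooth -- a bad draw is
    simply retried, unlike the single fixed group of Pollard's p-1."""
    for _ in range(curves):
        a = random.randrange(n)
        P = (random.randrange(n), random.randrange(n))
        try:
            for j in range(2, bound + 1):
                R, Q, k = None, P, j  # P <- j * P via double-and-add
                while k:
                    if k & 1:
                        R = _point_add(R, Q, a, n)
                    Q = _point_add(Q, Q, a, n)
                    k >>= 1
                P = R
                if P is None:
                    break  # degenerate: move on to the next curve
        except _FactorFound as exc:
            if 1 < exc.factor < n:
                return exc.factor
    return None
```

The key contrast with $p-1$ factoring is visible in the outer loop: a "bad" group order just costs one more random curve, which is exactly Rivest's point above.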

Maarten Bodewes
SEJPM

Just as a side note: using safe primes as RSA factors can have another benefit, namely that you can then use (almost) any odd integer as the "public" exponent $e$ and derive a corresponding "private" exponent $d$ (disclaimer: I'm not saying this is secure or even recommended!). If $p = 2p'+1$ and $q = 2q'+1$ are safe primes, then Euler's totient is $\phi(N) = (p-1)(q-1) = 2p' \cdot 2q' = 4p'q'$ with $p', q'$ prime. Hence, for any* odd $e$ we have $\gcd(e, \phi(N)) = 1$, meaning a multiplicative inverse $d$ of $e$ modulo $\phi(N)$ exists (for RSA to work you need $e \cdot d \equiv 1 \pmod{\phi(N)}$).
* any odd $e$ that is not a multiple of $p'$ or $q'$ (and $e \neq 1$, which would make encryption trivial)
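A quick numeric sanity check of this claim, using the toy safe primes $p = 23$ and $q = 47$ (my own choice of example values):

```python
from math import gcd

# Toy safe primes: p = 2*11 + 1 = 23, q = 2*23 + 1 = 47
pp, qq = 11, 23                     # p' and q', both prime
p, q = 2 * pp + 1, 2 * qq + 1       # 23 and 47, both safe primes
N, phi = p * q, (p - 1) * (q - 1)   # phi(N) = 4 * p' * q' = 1012

# Every odd e that is not a multiple of p' or q' is invertible mod phi(N):
for e in range(3, phi, 2):
    if e % pp != 0 and e % qq != 0:
        assert gcd(e, phi) == 1

# For instance e = 3 works, even though it is not a standard choice:
e = 3
d = pow(e, -1, phi)                  # 675, since 3 * 675 = 2025 = 2*1012 + 1
m = 42
assert pow(pow(m, e, N), d, N) == m  # textbook-RSA round trip
```

With non-safe primes, $\phi(N)$ typically has small odd factors, so many odd choices of $e$ would fail the $\gcd$ condition.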

Tobsec