
I am working on a device whose OS provides an RSA Private primitive, where the inputs are the message, and the usual components of a private key. Unfortunately it is bugged so that in some cases of supplying garbage for the private key, the device panics instead of returning an error code.

To try and avoid panics I would like to perform some pre-checks on the private key components. However, due to limited device resources, I'd prefer not to install a full Bignum library.

Are there any simple calculations I can do on the key components to check the key is valid? Time-efficiency is not a problem.

For example I can do peasant multiplication to verify $n = pq$; but what about validating $d_P$, $d_Q$, $q_{\mathrm{inv}}$ and $d$ ?

Mike Edward Moras
M.M

4 Answers


Mathematical checks of the consistency of an RSA key can include: $$\begin{align*} e&\text{ odd}\\ n&\text{ odd, and of prescribed bit size (if any)}\\ p&\text{ odd, and of prescribed bit size (if any)}\\ q&\text{ odd, and of prescribed bit size (if any)}\\ n&=p\cdot q\\ 1&=(d_p\cdot e)\bmod(p-1)\\ 1&=(d_q\cdot e)\bmod(q-1)\\ 1&=(q\cdot q_\text{inv})\bmod p\\ d_p&<p-1\;\text{ [redundant with }d_p=d\bmod(p-1)\text{ ]}\\ d_q&<q-1\;\text{ [redundant with }d_q=d\bmod(q-1)\text{ ]}\\ q_\text{inv}&<p\\ d_p&=d\bmod(p-1)\\ d_q&=d\bmod(q-1)\\ \end{align*}$$ All parameters must be non-negative. In the above, $\bmod$ is akin to the % operator in C or Java, extended to large integers. The upper range checks for $d_p$ and $d_q$ (and to some degree $q_\text{inv}$) are not mathematically indispensable, but are very customary and simplify the check of $d$, hence are included.
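The checks above can be sketched as follows on a toy key ($p=61$, $q=53$, $e=17$ — far too small for real use). Python's native big integers stand in for whatever arithmetic the device actually has; the point is only to make each condition concrete.

```python
# Consistency checks on a toy RSA key (p = 61, q = 53, e = 17).
# pow(x, -1, m) (modular inverse) needs Python 3.8+.
p, q, e = 61, 53, 17
n = p * q                              # 3233
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent, here 2753
dp, dq = d % (p - 1), d % (q - 1)
qinv = pow(q, -1, p)

assert e % 2 == 1 and n % 2 == 1 and p % 2 == 1 and q % 2 == 1
assert n == p * q
assert (dp * e) % (p - 1) == 1
assert (dq * e) % (q - 1) == 1
assert (q * qinv) % p == 1
assert dp < p - 1 and dq < q - 1 and qinv < p
assert dp == d % (p - 1) and dq == d % (q - 1)
print("all consistency checks pass")
```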

As pointed out by Maarten Bodewes, a primality check of $p$ and $q$ could also be performed; that will catch most random errors on these parameters. This will involve operations on large numbers though. And the chance that an accidental modification of the private key causes $p$ or $q$ to become composite while still passing all the above checks is low.
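For reference, such a primality check is typically a Miller-Rabin test; a minimal sketch follows. Note it relies on modular exponentiation with full-size numbers, which is exactly the large-number arithmetic the question hopes to avoid.

```python
# Minimal probabilistic Miller-Rabin primality test.
import random

def is_probable_prime(m, rounds=20):
    if m < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if m % small == 0:
            return m == small
    # write m - 1 = 2^s * t with t odd
    s, t = 0, m - 1
    while t % 2 == 0:
        s, t = s + 1, t // 2
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, t, m)               # full-size modular exponentiation
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = (x * x) % m
            if x == m - 1:
                break
        else:
            return False               # a witnesses that m is composite
    return True

assert is_probable_prime(61) and is_probable_prime(53)
assert not is_probable_prime(61 * 53)
```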

In addition, some devices have extra requirements and/or perform extra checks that are not mathematically necessary. When not met, these could cause the observed device panic; in that case it would be a good idea to identify these requirements/checks, and externally check that they are met. Here is a list, believed exhaustive for the industry-standard devices/libraries that I have encountered:

  • $d<n$, or perhaps the stronger $d<(p-1)(q-1)$, or the even stronger $d<(p-1)(q-1)/\gcd(p-1,q-1)$; that latter one might be gaining traction since it is in FIPS 186-4, Appendix B.3.1 additional requirement 3(a) (at least for key generation).
  • $2^{2k-1}<n<2^{2k}$ for some specified values of $k$ (like $k\ge512$ with $k$ multiple of $2^w$ for some $w\ge7$, as in ANSI X9.31:1998; I have also seen lower minimum $k$, and lower $w$ down to 3); FIPS 186-4 requires $2k\in\{1024,2048,3072\}$ (at least for key generation).
  • If the above applies, that $2^{k-1}<p<2^k$ or the stronger $2^{k-1/2}<p<2^k$ (same bounds for $q$); FIPS 186-4 specifies the latter, stronger bounds (at least for key generation).
  • That $|p-q|$ is above some threshold; FIPS 186-4 uses $2^{k-100}$. Notice that proper random generation of $p$ and $q$ makes this overwhelmingly probable, so much that this check is often (safely) skipped at key generation.
  • $e$ less than some threshold, like $2^{32}$ in an old Windows CryptoAPI, or $2^{256}$ in FIPS 186-4, or $n$ (that one is commonly specified), or $2^{2k}$ with $k$ as above.
  • $e\ge3$ (necessary for security), or $e\ge2^{16}+1$ (helping towards security of some variants of RSA with questionable padding), or $e=2^{16}+1$ (the most common value), or $e=3$ (the value leading to the simplest and fastest implementation of the public-key function, and believed safe for proper RSA padding).
  • I've seen code that requires some special condition on the high order bits of $n$ or/and $p$ and/or $q$ for reliable estimation of quotient in some modular reduction; or have other odd requirements on the public modulus; if that's not well documented, that's a bug.
  • I've once seen an implementation with a tendency to fail for large values of $q_\text{inv}$, and wondered if forcing $p<q$, or vice versa, would prevent that failure (note that exchanging $p$ and $q$ changes $q_\text{inv}$, though).
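Several of the requirements in the list reduce to simple bit-length and magnitude comparisons. A sketch, again on the toy key (the bit sizes and the $|p-q|$ threshold here are illustrative, not the normative FIPS values):

```python
# Illustrative range checks patterned on the requirements above,
# with toy sizes so the example is self-contained.
def bit_len_ok(x, bits):
    """True iff 2^(bits-1) < x < 2^bits, i.e. x is exactly `bits` bits."""
    return (1 << (bits - 1)) < x < (1 << bits)

p, q, e = 61, 53, 17            # toy 6-bit primes; n is 12 bits
n, k = p * q, 6

assert bit_len_ok(n, 2 * k)     # 2^(2k-1) < n < 2^(2k)
assert bit_len_ok(p, k) and bit_len_ok(q, k)
assert abs(p - q) > 1           # a real check would use e.g. 2^(k-100)
assert 3 <= e < n               # e within commonly specified bounds
```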

Various conditions beyond being prime are often specified for $p$ (and similarly $q$) at key generation; like, $p-1$ or/and $p+1$ having a large prime factor (a requirement in FIPS 186-4 for 1024-bit $n$); but I have never seen that checked after key generation (and such check would be uneasy). Similarly, I have seen requirements for a minimal value of $d$, but have never seen that enforced after key generation.

Note: actual use is rarely made of both $d$ and $(p,q,d_p,d_q,q_\text{inv})$ for the same private key; sometimes things will work without $d$, or with only $(n,e,d)$; that would simplify external checks.

A check that $n=p\cdot q$ can be conveniently performed modulo one auxiliary modulus $r$ (or a few pairwise coprime moduli). We compute $\widehat n=n\bmod r$, $\widehat p=p\bmod r$, $\widehat q=q\bmod r$, and check that $\widehat n=(\widehat p\cdot\widehat q)\bmod r$. Convenient values of $r$ are $2^{32}-1$ (because modular reduction is easy; like casting out nines, only in base $2^{32}$ rather than base 10), and $2^{32}$ (which operates only on the low-order 32 bits of $n$, $p$, $q$; and thus should not be the only $r$ used for a check). As a consequence of the Chinese Remainder Theorem, these heuristic checks modulo $r$ become a proof if enough $r$ are used that the Least Common Multiple of all the $r$ is at least $n$.
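The casting-out trick for $r=2^{32}-1$ can be sketched by operating only on 32-bit limbs, which is all a bignum-less device needs (Python big integers appear below only to build the test data; the `mod_r` routine itself never handles more than 33 bits):

```python
# Check n == p*q via residues modulo r = 2^32 - 1, touching only
# 32-bit limbs (little-endian), as a device without bignum could.
R = (1 << 32) - 1

def mod_r(limbs):
    """Residue mod 2^32 - 1: since 2^32 ≡ 1 (mod r), just sum the limbs."""
    acc = 0
    for w in limbs:
        acc += w
        if acc > R:                  # fold the carry back in (acc < 2^33)
            acc -= R
    return 0 if acc == R else acc

def to_limbs(x):                     # demo helper: split into 32-bit words
    out = []
    while x:
        out.append(x & 0xFFFFFFFF)
        x >>= 32
    return out or [0]

p = 0xF123456789ABCDEF              # arbitrary 64-bit test values
q = 0xE0000000000000D1
n = p * q                            # computed here only to build test data
assert (mod_r(to_limbs(p)) * mod_r(to_limbs(q))) % R == mod_r(to_limbs(n))
assert (mod_r(to_limbs(p)) * mod_r(to_limbs(q))) % R != mod_r(to_limbs(n + 1))
```

The 64-bit product of two residues is the largest intermediate value; a device with a 32×32→64 multiply can do the whole check.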

An equivalent shortcut for operations involving $\bmod$ with a large right-hand operand is much more complex. It seems to involve computing the quotient. Assuming $e$ fits in 32 bits, to check that $1=(d_p\cdot e)\bmod(p-1)$ we might compute $x=\lfloor(d_p\cdot e)/p-0.05\rfloor$ using 64-bit floating point ($x$ will be less than $e$, thus fits in 32 bits, and is most often the correct quotient), then check that $x\cdot(p-1)+1=d_p\cdot e$ using reduction modulo a small $r$ (or a few), as above; if that check fails, we increase $x$ by one (always finding the right quotient if $d_p$, $e$ and $p$ are consistent), then repeat the check. But when the quotient gets large, basically we need full-blown bignum arithmetic.
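The quotient-recovery idea can be sketched on the toy key; here the identity $x\cdot(p-1)+1=d_p\cdot e$ is confirmed with exact integers for simplicity, where a device would instead confirm it modulo a few small $r$ as above:

```python
# Verify dp*e ≡ 1 (mod p-1) by guessing the quotient with floating
# point, then confirming x*(p-1) + 1 == dp*e (toy key p=61, e=17).
p, e = 61, 17
dp = 53                           # = d mod (p-1) for the toy key d = 2753

x = int(dp * e / p - 0.05)        # float estimate of the quotient, may be low
while x * (p - 1) + 1 != dp * e:  # bump x until the identity holds...
    x += 1
    if x > e:                     # ...or give up: the quotient must stay < e
        raise ValueError("dp, e, p inconsistent")
print("quotient", x, "confirms dp*e = x*(p-1) + 1")
```

For these values the float estimate gives $x=14$ and one increment reaches the true quotient $x=15$ ($15\cdot60+1=901=53\cdot17$).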

CAUTION: beware that most of the above checks (except those manipulating only $n$ and/or $e$) manipulate secret material, and thus can be vulnerable to side-channel attacks. These checks must be done in a safe environment (or protected against side channels such as timing or power analysis, which is complex).

fgrieu

Probably the easiest way to do this is to take a canonical encoding of the private key and compare it to a hash value. I presume a hash function is needed anyway to do anything sensible with the private key. The hash value over the contents of the key would normally be called a key check value.
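A minimal key-check-value sketch follows; note that the fixed-width big-endian encoding used here is an assumption for illustration, not a standard canonical form, and the toy key values come from the example $p=61$, $q=53$, $e=17$:

```python
# Key check value: hash a (here assumed) canonical encoding of the key
# components and compare against a reference stored at key-load time.
import hashlib

def key_check_value(n, e, d, width=16):
    # Fixed-width big-endian concatenation as an ad-hoc canonical encoding.
    blob = b"".join(x.to_bytes(width, "big") for x in (n, e, d))
    return hashlib.sha256(blob).digest()[:8]   # truncated KCV

n, e, d = 3233, 17, 2753
stored = key_check_value(n, e, d)              # recorded when key was loaded
assert key_check_value(n, e, d) == stored      # key intact
assert key_check_value(n + 1, e, d) != stored  # corruption detected
```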

Assuming that the sensitive private key values cannot be read (or replaced completely), it should be impossible even for an attacker to create a hash value for an altered key. You should however make sure that the sensitive parts of your key cannot be replaced within your protocol if you want to protect against active attacks.


You could also create a hash over the modulus and compare that with a stored hash. This is called a key ID. You could then perform the checks that fgrieu has supplied. Or you could simply perform your own signature generation and check it internally against the public key - I presume this operation is already available to you. If one of the components is off the signature should not verify.
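The sign-then-verify self-test can be sketched with textbook RSA (no padding; a real device would sign a padded digest), again on the toy key:

```python
# Textbook-RSA round trip as an internal self-test (toy key
# p = 61, q = 53, e = 17, d = 2753; no padding).
n, e, d = 3233, 17, 2753
m = 1234                          # arbitrary message representative < n

s = pow(m, d, n)                  # private operation: sign
assert pow(s, e, n) == m          # public operation: verify recovers m

d_bad = d + 2                     # simulate a corrupted private exponent
assert pow(pow(m, d_bad, n), e, n) != m   # corruption is caught
```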

Beware that different values may be used for "plain" RSA using just $e$ and $n$ or RSA using the components required for the Chinese Remainder Theorem when doing the checks described by fgrieu or when doing the signature verification. CRT parameters may not always be available - a key can be valid without them.

Maarten Bodewes

The simplest solution would be to use OpenSSL's RSA_check_key() function.

Or from the command line:

$ openssl rsa -check -in good.key -noout
RSA key ok

# (change 1 bit of key data)
$ openssl rsa -check -in bad.key -noout
RSA key error: dmp1 not congruent to d
r3mainer

As you write that you can call the private and the public operations on the imported key, why not simply use these two operations to verify that the key is not corrupted? After importing the key, just try calling the available functions, catching possible exceptions. Of course this works best if you also know the matching public key.