
I need to generate a CA (4096-bit RSA) and server keys for OpenVPN, and I want them to be "top quality". Here is my plan:

  • gather entropy from multiple sources (saving each to its own file): FreeBSD Yarrow, Linux (with the haveged daemon running while entropy_avail is at 4096), ANU QRNG, something else (e.g. a Wi-Fi capture file encrypted with a random password; other suggestions welcome)
  • mix all random files into a single one
  • use the result to seed openssl -rand [myrndfile] when generating the keys (rough sketch at the end of this question)

If my plan sounds ok:

  • what size should the random files be?
  • what would be the best method to mix the files into a single one?
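
Here is that rough sketch; the file names and sizes are just placeholders, and the plain concatenation in step 2 is exactly the part I am unsure about:

    # 1. gather entropy from several sources into individual files
    dd if=/dev/random of=entropy-freebsd.bin bs=64 count=16   # FreeBSD (Yarrow)
    dd if=/dev/random of=entropy-linux.bin bs=64 count=16     # Linux, haveged running
    # entropy-qrng.bin fetched from the ANU QRNG service, etc.

    # 2. mix all random files into a single one (plain concatenation? XOR? hash?)
    cat entropy-*.bin > myrndfile

    # 3. use the result to seed OpenSSL when generating the keys
    openssl genrsa -rand myrndfile -out ca.key 4096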
Mike Edward Moras
Vincent

2 Answers


First off, using '-rand' only seeds the OpenSSL RNG. The seed can be 1 byte or 1 TB; it is only used to get things started internally. OpenSSL then uses the system's entropy to actually generate the primes needed by RSA.
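
As a rough illustration (the file name is a placeholder, and option details vary slightly across OpenSSL versions), -rand just mixes your file into the seed, and OpenSSL draws on the system for everything else:

    # seed the OpenSSL PRNG from a file, then generate the 4096-bit key
    openssl genrsa -rand myrndfile -out ca.key 4096

    # with no -rand at all, OpenSSL seeds itself from the system (/dev/urandom)
    openssl genrsa -out ca.key 4096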

Further, entropy is just a measure of unpredictability in a sequence, not an actual pool of stored bits. The higher the entropy estimate, the more confidence you can have that anything depending on it, such as a sequence of random numbers, will behave unpredictably.

Again though, the Linux kernel file /proc/sys/kernel/random/entropy_avail is just an estimate. When the entropy pool reads "4096 bits", that just means the random numbers being generated have the highest quality of unpredictability the kernel can produce. As the estimate drops, so does the confidence in the sequence of random numbers. When the estimate hits 0, the kernel blocks further reads of random data until the pool can be refilled, and every read of random data lowers the estimate. This is the behavior of /dev/random.
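
You can watch that estimate directly on a Linux machine; a small check like the following (assuming the usual procfs paths) shows the pool draining as random data is read:

    # current entropy estimate, in bits (maximum 4096 on these kernels)
    cat /proc/sys/kernel/random/entropy_avail

    # read a little from the blocking device, then look at the estimate again
    dd if=/dev/random of=/dev/null bs=64 count=4
    cat /proc/sys/kernel/random/entropy_avail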

In other words, think of the entropy pool as a "crypto thermometer". When the meter is at "full up", the generated numbers are very difficult, if not nearly impossible, to reproduce, and highly unpredictable. When the meter is completely empty, a third party with only a little knowledge of the system could plausibly reproduce the generated bits.

So, getting to your OpenSSL key question. OpenSSL will want the kernel to keep entropy as full as possible. However, OpenSSL seeds from /dev/urandom by default. This device still draws down the entropy estimate, but rather than block when the estimate hits zero, it uses a PRNG to generate the rest of the data. Keep the entropy pool filled, and the PRNG will never be utilized.
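
A quick way to see the difference between the two devices (on kernels of this era; newer Linux kernels have since reworked /dev/random so it no longer blocks after boot):

    # may stall partway through until the kernel's entropy estimate recovers
    dd if=/dev/random of=/dev/null bs=512 count=20

    # completes immediately; falls back to the PRNG when the estimate is low
    dd if=/dev/urandom of=/dev/null bs=512 count=20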

The /dev/random device on FreeBSD is a strict PRNG, not a TRNG. As you mentioned, it uses the Yarrow algorithm to generate random numbers. However, after generating 2^256 numbers, the cycle repeats, producing the exact same numbers in the exact same order as before, as is the case with all PRNGs. By comparison, /dev/urandom in the Linux kernel uses SHA-1, which will generate a total of 2^160 numbers before recycling when the entropy pool is exhausted.

Now, to address your question. You mentioned that you are using haveged(8) on Linux to keep the entropy reserve full. On most systems, haveged(8) feeds about 1 MB/s of true random data to the kernel and uses the ioctl() system call to increase the entropy counter. As such, your 4096-bit RSA OpenSSL key will have the highest quality of cryptographic strength you could ask for, and adding extra random data from other systems won't increase its strength.
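
If haveged(8) is not already running as a service, starting it by hand and watching the estimate is enough to confirm it is doing its job (the -w flag sets the low-water mark the daemon tries to stay above; package and service names vary by distribution):

    # keep the kernel's pool above roughly 3072 bits (example threshold)
    haveged -w 3072

    # the estimate should now hover near the 4096-bit ceiling
    watch -n1 cat /proc/sys/kernel/random/entropy_avail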

The whole point of the key generation is to take random numbers, from either your file or a random number generator, as noise input to an algorithm that produces very large primes. The trick is getting to those primes. With an entropy pool of 4096 bits, at full up all the time, as haveged(8) will provide, the chance that someone could regenerate the same primes, even with full knowledge of your system, is vanishingly small.

So, in my opinion: run haveged(8) to keep the kernel's entropy estimate topped off at 4096 bits, and just generate your certificate there using "-rand /dev/random", rather than collecting a file from every system.
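
In practice that boils down to something like the following on the Linux box running haveged(8); the names, key size and validity period are placeholders to adapt to your own CA layout or easy-rsa setup:

    # CA key and self-signed CA certificate
    openssl genrsa -rand /dev/random -out ca.key 4096
    openssl req -new -x509 -key ca.key -out ca.crt -days 3650

    # server key and signing request, to be signed with the CA key afterwards
    openssl genrsa -rand /dev/random -out server.key 4096
    openssl req -new -key server.key -out server.csr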

Aaron Toponce

The OpenSSL wiki and source code both indicate that the random seed function uses only the first 32 bytes (256 bits) supplied to seed the cryptographically strong PRNG; any additional bytes are discarded. That being said, good practice is to supply the full 32 bytes and make them as "random" as possible.

Additionally, the function should be forcibly re-seeded, or called fresh with a new seed, for each new key or random number being generated, to guarantee that the same stream from the CS-PRNG isn't used for more than one key.

If you have a lot of data you believe is random and really feel you must incorporate all of it, use a hash function to generate the 32-byte seed. You then place some trust in the hash function not having weaknesses, but that is probably one of the lesser worries.

Summary: generate the best 32-byte seeds you can and reseed often.
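
A minimal sketch of that hashing approach, assuming the collected files from the question (the names are placeholders):

    # condense everything gathered into a single 32-byte (256-bit) seed
    cat entropy-freebsd.bin entropy-linux.bin entropy-qrng.bin \
        | openssl dgst -sha256 -binary > seed.bin

    # use it for one key, then build a fresh seed before generating the next
    openssl genrsa -rand seed.bin -out server.key 4096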

Bryan D.