
As you know, in the past few weeks it has emerged that NIST/NSA have been involved in weakening encryption standards over a long period of time, so that they can retain the ability to break encryption used by corporations and the general public. We have already seen evidence of that with Dual_EC_DRBG, which NIST recently disavowed after public outcry. That is probably a red herring, though; I think there is another smoking gun here.

I want to highlight a particular issue in the Advanced Encryption Standard selection process. The Rijndael algorithm offered block sizes of 128, 192 or 256 bits to match the key sizes. When that algorithm was selected as the winner, NIST went away and defined a "restricted" block size of 128 bits for AES. Apparently that was what the spec called for in the first place.

However, that doesn't make sense to me. If you were concerned about security, why would you allow a 256-bit key size but not a matching block size? It would appear more secure to keep them the same length. For example, a modern cipher like Threefish, which has key sizes of 256, 512 or 1024 bits, also has matching block sizes of 256, 512 or 1024 bits.

  • Can the overall security of AES with a 256-bit key be reduced from $2^{256}$ to $2^{128}$ by attacking the smaller block size?

  • Could that be combined with various other attacks on AES to reduce the $2^{128}$ security to even less, e.g. $2^{100}$?

  • Now that commercial quantum computers are viable (a single D-Wave Vesuvius is reportedly ranked in the top 10 of the TOP500 list), and assuming the NSA have taken advantage of that and built a quantum cluster in the basement of Fort Meade or the Utah data center: if they employed Grover's algorithm, which runs in $O(\sqrt{N})$ time, would that reduce the security of AES to $2^{64}$ for all key sizes, or even worse, to $2^{50}$ when combined with other attacks?
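
For reference, the Grover arithmetic I am assuming above (my own back-of-the-envelope figure, not something taken from the links below): an unstructured search over $N$ candidate keys takes on the order of $\sqrt{N}$ evaluations, so for a $k$-bit key

$$\sqrt{N} = \sqrt{2^{k}} = 2^{k/2},$$

which for $k = 128$ gives the $2^{64}$ figure I mention.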

Extra links:

https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
https://en.wikipedia.org/wiki/Dual_EC_DRBG#Controversy
http://www.gizmag.com/d-wave-quantum-computer-supercomputer-ranking/27476/
https://www.scientificamerican.com/article.cfm?id=google-nasa-snap-up-quantum-computer-dwave-two
https://dwave.wordpress.com/2011/12/01/vesuvius-a-closer-look-512-qubit-processor-gallery/
http://www.wired.com/wiredenterprise/2013/06/d-wave-quantum-computer-usc/
http://arstechnica.com/security/2013/09/nsas-pipe-dream-weakening-crypto-will-only-help-the-good-guys/
https://en.wikipedia.org/wiki/Grover%27s_algorithm
https://en.wikipedia.org/wiki/Threefish    
Gabriel

2 Answers


Security issues related to block size boil down to the following: a pseudorandom permutation (PRP) is not a pseudorandom function (PRF), and the difference becomes visible when you query the function too many times. Imagine a function which accepts as inputs, and offers as outputs, elements from a set of size $N$. For instance, the inputs and outputs are blocks of $n$ bits, so $N = 2^n$. If you feed this function distinct input values, you will at first get as many distinct output values. However, with a PRF, you expect to obtain a collision at some point: a new input which yields the same output as a previous, distinct input. On average, this should occur after about $\sqrt{N}$ queries. With a PRP, you will never get a collision, because permutations are, by definition, injective.
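
Here is a minimal Python sketch (my own illustration, not part of the original answer) of that distinguisher: a random function on $n$-bit blocks repeats an output after roughly $2^{n/2}$ queries, while a random permutation never does.

```python
import secrets

def queries_until_collision(n_bits, limit=1 << 20):
    """Model a PRF on n-bit blocks: every distinct query returns an independent,
    uniformly random n-bit output.  Count queries until an output repeats."""
    seen = set()
    for q in range(1, limit + 1):
        out = secrets.randbelow(1 << n_bits)
        if out in seen:
            return q          # first collision after q queries
        seen.add(out)
    return limit              # no collision within the limit (practically impossible here)

n = 32  # toy block size so the experiment finishes quickly; AES uses n = 128
runs = [queries_until_collision(n) for _ in range(20)]
print("observed average queries to first collision:", sum(runs) // len(runs))
print("birthday estimate 2^(n/2)                  :", 1 << (n // 2))

# A random permutation (a block cipher under one fixed key) never repeats an
# output on distinct inputs, so the *absence* of collisions after ~2^(n/2)
# queries is what betrays a PRP pretending to be a PRF.
```

With the toy 32-bit block size, the first collisions should cluster around $2^{16}$ queries, which is exactly the birthday behaviour described above.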

So when encrypting with a block cipher, using blocks of $n$ bits, you begin to have problems when processing more than $2^{n/2}$ blocks with the same key. 3DES has 64-bit blocks, so encrypting more than $2^{32}$ 8-byte blocks with 3DES can be troublesome: that's 34 gigabytes or so, a value which is not so huge nowadays. To avoid this problem, NIST launched an open competition for an "Advanced Encryption Standard" (the AES) with the following rules:

  • 128-bit blocks.
  • Keys of 128, 192 and 256 bits.
  • Not slower than 3DES (many candidates turned out to be widely faster).

And that's what they defined. They did not really care that the selected candidate (Rijndael) had extended versions which could accept other block sizes; 128 bits were sufficient.

In all of the above, the important point is that the $2^{n/2}$ figure relates to the number of queries. It is generally estimated that, for an attacker, obtaining $x$ plaintext/ciphertext pairs is much more difficult than trying out $x$ potential keys, since the former requires the cooperation of the target system, while the latter requires only some computing power on the attacker's own machines. You will have a hard time finding an "honest system" which needs to encrypt $2^{68}$ bytes with AES (that's close to 300 billion gigabytes), and even more so an honest system where the attacker can obtain 300 billion gigabytes of known plaintext while still being "locked out" (if the attacker can get the honest system to process that much data, then what is left to attack?).
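
As a quick sanity check on those figures (my own sketch, just spelling out the arithmetic behind the $2^{n/2}$-block comfort limit described above):

```python
def birthday_data_limit_bytes(block_bits):
    """Data processed under one key when the ~2^(n/2)-block birthday bound is hit."""
    blocks = 1 << (block_bits // 2)        # 2^(n/2) blocks
    return blocks * (block_bits // 8)      # times the block size in bytes

for n in (64, 128):
    limit = birthday_data_limit_bytes(n)
    print(f"{n:3d}-bit blocks: 2^{n // 2} blocks -> {limit / 1e9:,.0f} GB")

# 64-bit blocks (3DES): 2^32 *  8 bytes  ~ 34 GB            -- easy to reach today
# 128-bit blocks (AES): 2^64 * 16 bytes  = 2^68 bytes ~ 295 billion GB -- not realistic
```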

Basically, the block size and the key size work on different scales, and it just happens, fortuitously, that "128 bits" is a nice level for keys (exhaustive search on $2^{128}$ keys is not feasible with current technology), and also a nice level for block size ($2^{64}$ blocks of 16 bytes of known plaintext is not a realistic attack scenario with current technology).

Notably, if a quantum computer ever sees the light of day, it would be able to break a 128-bit key with about $2^{64}$ effort (via Grover's algorithm), but it would do absolutely nothing about the issues related to block size: 128-bit blocks would still be as good as they are today.


All of the above is for usage of a block cipher for encryption. When a block cipher is to be reused as a core element for something else, e.g. a hash function, then other constraints may apply. In particular, for the Skein hash function, the internal block cipher had to offer much larger blocks if the hash function was to live up to its promises (note that a hash function has no key, so we are far from the attack model of an encryption system). This is why Threefish, the block cipher inside Skein, has big blocks. This is not because big blocks are good for encryption; it is because big blocks are necessary when a block cipher is turned into a hash function the way Skein does it.

A similar case is Whirlpool, which uses the "W" block cipher, a Rijndael derivative with even larger blocks (512 bits) and a revamped key schedule.


As for D-Wave systems, they are not "true" quantum computers; they compute a specific problem instance in a "somewhat quantumish way", but are not able to apply the nifty algorithms which eat crypto for breakfast. The difference is qualitative, not a question of mere tuning. See this for pointers.

Thomas Pornin

There is an issue with the key schedule needed to take a 256-bit key and use it with a 128-bit block. It turns out that the key schedule for 256-bit AES keys was not as well worked out as the one for 128-bit keys. Whether this problem is a consequence of the mismatch between key size and block size isn't something I know.

Flaws in the 256-bit key schedule make it vulnerable to related-key attacks (not a scenario that should arise in well-designed applications), and the "weakening" to date doesn't bring it below 128 bits. But this does have some people thinking that it is actually better to stay with 128-bit AES keys than to move to larger ones:

for new applications I suggest that people don't use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you're already using AES-256, there's no reason to change.

I'm not sure I agree, but that is a bit of a digression from your principal question. To the extent that the problems with the 256-bit key schedule can be attributed to the 128-bit block size, then yes, we have a problem because of that block size choice.

Jeffrey Goldberg