
If a cipher implementation passes unit tests using test vectors from some trusted source (the NIST vectors for AES, for example), can we say that it fully conforms to the specification and must have been implemented correctly?

Is it possible to backdoor a cipher (or hash function, I suppose) in such a way that it still appears to be correct and is compatible with different implementations of the same cipher?

This question was motivated by the advice given by figlesquidge in his answer to the question AES column mixing and S-Box confusion, where he says:

Standard AES disclaimer: Given the questions you've asked, you should not implement AES yourself in a real-world system because there are lots of security considerations when implementing ciphers.

2 Answers


can we say that it is fully conforming to the specification, and must have been implemented correctly?

No.

Is it possible to backdoor a cipher (or hash function, I suppose) in such a way that it still appears to be correct and is compatible with different implementations of the same cipher?

Certainly.

Say I have a function AES(k,m)=c, where k and m are the key and the message respectively, and c is the resulting ciphertext.

Now I could write my own AES as follows:


def mikeazo_AES(k, m):
    # Backdoor: under this one attacker-chosen key, "encryption"
    # silently returns the plaintext unchanged.
    if k == 0x12abed347816981812abed3478169818:
        return m
    else:
        # Any other key defers to a correct AES implementation,
        # so all published test vectors pass.
        return AES(k, m)

Thus, given a specific key, it will simply return the plaintext. Since that key is not part of the test vectors, they will all pass.

This will be compatible with all other implementations unless that specific key is used. Since keys are generated at random in practice, the probability of hitting it by chance is negligible (on the order of 2⁻¹²⁸). This may not seem very useful at first glance, but as an attacker you could add some sort of trigger mechanism that forces the special key to be used; presumably you can do this, since you have already got your software onto the target's system.
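To make the point above concrete, here is a self-contained sketch. The names are hypothetical, and a toy XOR routine stands in for real AES, since the backdoor pattern is independent of the underlying cipher:

```python
# Toy demonstration of a backdoored cipher that passes its test vectors.
# NOTE: toy_cipher is a repeating-key XOR stand-in for AES -- it is NOT
# secure and is used only to illustrate the backdoor pattern.

BACKDOOR_KEY = bytes.fromhex("12abed347816981812abed3478169818")

def toy_cipher(key: bytes, msg: bytes) -> bytes:
    """Stand-in for a correct AES(k, m) implementation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

def backdoored_cipher(key: bytes, msg: bytes) -> bytes:
    """Behaves identically to toy_cipher except under one trigger key."""
    if key == BACKDOOR_KEY:
        return msg  # "encryption" silently returns the plaintext
    return toy_cipher(key, msg)

# "Test vectors" produced by the honest implementation under other keys:
vectors = [(bytes(16), b"attack at dawn"),
           (b"sixteen byte key", b"hello, world")]
for k, m in vectors:
    assert backdoored_cipher(k, m) == toy_cipher(k, m)  # every vector passes

# ...yet under the trigger key the "ciphertext" is the plaintext:
assert backdoored_cipher(BACKDOOR_KEY, b"secret") == b"secret"
```

Any test suite whose vectors avoid the trigger key, which is overwhelmingly likely, will report this implementation as fully conforming.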

There are better back doors that could be inserted; see the example from CodesInChaos in the comments.

mikeazo

The answer is "no", in two ways.


First, the implementation of the algorithm could make use of side channels to leak data. The SSL timing attack permits an attacker who can trigger multiple encryptions to "tease out" timing information that reveals bits of the key material. The original attack was demonstrated against the widely used OpenSSL implementation. Simply stated, if a particular key bit was 1, the encryption code would take a path through the code that ran in a measurably different amount of time than the path taken when the same bit was 0. By sending repeated requests for secure socket connections and timing the responses, the attackers were able to reconstruct the private key of the server.

Therefore, it would be possible for a malicious cryptographic implementation to deliberately include an asymmetric code path that reveals the bits of the key via timing, not unlike Morse code. It would pass all test vectors perfectly, yet still be vulnerable. And not only would it be possible, it would be easy: all a malfeasor would have to do is implement an older version of the OpenSSL code, in which such a vulnerability is already present, and mask the activity with feigned ignorance.
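The same class of leak is easy to reproduce in miniature. The sketch below (hypothetical code, not the OpenSSL implementation) compares a secret tag two ways: an early-exit loop whose running time depends on how long the correctly guessed prefix is, and the standard library's constant-time comparison. Both return identical results, so both would pass any functional test, yet only one leaks through timing:

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte: running time grows with the
    # length of the matching prefix -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its running time does not depend on where the inputs differ.
    return hmac.compare_digest(a, b)

tag = bytes.fromhex("deadbeefdeadbeefdeadbeefdeadbeef")
for guess in (tag, b"\x00" * 16, tag[:8] + b"\x00" * 8):
    # Functionally identical answers on every input...
    assert leaky_equal(tag, guess) == safe_equal(tag, guess)
```

No black-box correctness test distinguishes the two functions; the difference only shows up in how long each call takes.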


The other way a back door could get in is if the algorithm itself deliberately includes a back door, such as was suspected in the recent Dual EC DRBG random number generator scandal. Even a perfect implementation would yield a system that contains a back door.

There was a long debate on whether or not the DES S-boxes contained such a back door, as the NSA had "strengthened" IBM's Lucifer algorithm back in the 1970s. It turned out that they actually did strengthen it against an attack (called differential cryptanalysis) that wasn't publicly discovered until around 1990, by the civilian cryptographers Biham and Shamir. But an algorithm that incorporates deliberately weak S-boxes would also have a back door.
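As a rough illustration of why S-box choice matters (a sketch, not a cryptanalysis): AES's S-box is built from inversion in GF(2⁸) followed by an affine map, as defined in FIPS 197, and is far from linear. A "weak" S-box such as the identity satisfies S(x ⊕ y) = S(x) ⊕ S(y) for every pair of inputs, which lets linear relations between plaintext, key, and ciphertext survive through the rounds:

```python
# Build the AES S-box from its definition (FIPS 197): multiplicative
# inverse in GF(2^8) mod x^8 + x^4 + x^3 + x + 1, then an affine map.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the AES reduction polynomial 0x11B."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(x: int) -> int:
    """Multiplicative inverse via x^254 (0 maps to 0 by convention)."""
    if x == 0:
        return 0
    r = 1
    for _ in range(254):
        r = gf_mul(r, x)
    return r

def affine(x: int) -> int:
    """The FIPS 197 affine transformation with constant 0x63."""
    out = 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

SBOX = [affine(gf_inv(x)) for x in range(256)]
assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED  # FIPS 197 examples

# Count how often S(x ^ y) == S(x) ^ S(y): always true for the identity
# "S-box", comparatively rare for the real AES S-box.
def linear_hits(sbox):
    return sum(sbox[x ^ y] == sbox[x] ^ sbox[y]
               for x in range(256) for y in range(256))

identity = list(range(256))
assert linear_hits(identity) == 256 * 256      # perfectly linear
assert linear_hits(SBOX) < linear_hits(identity)  # nonlinearity in action
```

A perfect, test-vector-passing implementation of a cipher built on the identity S-box would still be trivially breakable, which is the sense in which a back door can live in the algorithm rather than the code.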

John Deters