77

In the media, I sometimes read about "backdoors" in encryption algorithms. I'd like to understand what such a backdoor actually consists of. Is it:

a) a hidden weakness in the math formulas that can cause security to be broken by brute force in a reasonable amount of time rather than the universe's expected life span?

or b) a plain hole that makes it possible for a knowing person to extract information without any brute force attack at all; it's a way "straight in"!

I don't know if it's possible to answer this without going into complex math, but I hope the answer can be kept in layman's terms as much as possible. Creating such a backdoor would involve the challenge of making it almost impossible to find. Moreover, if found, its creator could deny malicious intent and say it was just an honest mistake. Information about how this works and would play out in practice would be interesting as well!

Patriot

8 Answers

75

There are two somewhat orthogonal concepts in backdooring encryption algorithms:

  1. The backdoor can be explicit or implicit. An explicit backdoor is one that everybody knows is there. An implicit backdoor strives to remain undetected by the algorithm's owners. Of course, when there is an explicit backdoor, people tend to avoid the algorithm altogether, so explicit backdoors can hope to work only in the presence of a legal framework that forces implementers to use the backdoored system.

    An example of an explicit backdoor is the Clipper chip (which was ultimately abandoned). The backdoor is not really in the algorithm, more in the assembly of algorithms into a protocol, and technically it was an automatic key escrowing method. For an implicit backdoor, see the Dual EC DRBG as a famous example: it worked only as long as nobody was aware that it was backdoored.

  2. The backdoor's security may be quantifiable, or not. In the case of Dual EC DRBG, the mechanism uses well-trodden mathematical paths: the NSA knew that exploiting the backdoor required knowledge of an internal secret key, based on the discrete logarithm problem (on elliptic curves). A toy sketch of that trapdoor idea appears at the end of this answer.

    Non-quantifiable security is what you get when you try to push, for instance, a deliberately flawed algorithm, or one for which you know of a cryptanalytic method that you did not publish. This is a very dangerous game for a spy agency, because you cannot really know whether third parties could find the flaw or not. Such backdooring tends to backfire in the long term.

    Interestingly, the NSA tends not to use non-quantifiable backdoors. A good example is DES. At the time it was designed, the NSA believed that it could tackle a 2^56 exhaustive search, and that nobody else (in particular the Soviets) had the technology and budget for that. The NSA also knew of a then-novel cryptanalytic method (differential cryptanalysis). So when the NSA intervened in the design of DES, it insisted on shortening the key from 64 to 56 bits (that is the addition of a quantifiable backdoor, and it was rather obvious, so quasi-explicit), and also on strengthening the design against differential cryptanalysis. This is a good example of how the NSA consciously refrained from adding a non-quantifiable backdoor. There is only one thing that spy agencies fear more than not being able to spy: the idea that other, competing spy agencies may also be able to spy.

So a real, good backdoor is one that uses maths to offer quantifiable security against unauthorized usage of the backdoor. It is extremely difficult to have quantifiable security without making the backdoor explicit. The "best in class" in that area is Dual EC DRBG, and even before the Snowden business, cryptographers were already finding it weird and strongly suspected foul play (see the analyses back in 2007, 6 years before Snowden).
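
To make the "quantifiable security" point concrete, here is a toy Python sketch of the trapdoor idea behind Dual EC DRBG, using plain modular exponentiation instead of elliptic-curve points. Every constant and name below is invented for illustration and is nothing like the real standard; it only shows how a secret relation between two public constants lets the designer, and only the designer, predict future output from a single observed output.

```python
# Toy analogue of the Dual EC DRBG trapdoor (illustration only, NOT the real thing).
# Two public constants g and h are published; the designer secretly chose h = g^d.

p = 2**61 - 1            # a Mersenne prime modulus (toy size, not secure)
g = 5                    # public constant, analogue of the point Q
d = 123456789            # the designer's secret trapdoor value
h = pow(g, d, p)         # public constant, analogue of the point P = d*Q

def step(state):
    """One round of the toy generator: emit an output and advance the state."""
    output = pow(g, state, p)       # what the user of the RNG sees
    next_state = pow(h, state, p)   # kept internal to the generator
    return output, next_state

# The victim seeds and runs the generator.
state = 987654321
out1, state = step(state)
out2, state = step(state)

# The designer observes out1 only. Knowing d, they compute
# out1^d = (g^s)^d = (g^d)^s = h^s, which is the generator's next internal state.
recovered_state = pow(out1, d, p)
predicted_out2, _ = step(recovered_state)

assert predicted_out2 == out2       # future "random" output predicted exactly
```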

Thomas Pornin
16

Both of your formulations of encryption backdoors are valid. However, a more efficient and harder-to-detect method consists in biasing the random generators used to produce private and public keys (known example). The idea being: if you can predict the random generator's output, you can trivially regenerate the same private/public keys, and then decrypt any message as if you were the legitimate owner.

Keeping the backdoor hidden is probably easier (!!take this assumption with extreme caution!!) with this method, as analyses of random number generators are extremely complex and costly. Cryptographers here may be able to answer this better than I can.
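
To illustrate the idea (and only the idea), here is a minimal Python sketch in which someone who can predict the generator's seed reproduces the victim's key without any brute force. `random.Random` stands in for a rigged or biased generator, and the "key derivation" is a deliberately silly placeholder, not a real cryptosystem.

```python
# Sketch: a predictable RNG makes key generation predictable (illustration only).
import random

def generate_keypair(rng):
    # In a real system this randomness would feed e.g. RSA or EC key generation;
    # here the "private key" is simply the raw random value.
    private_key = rng.getrandbits(256)
    public_key = private_key * 2 + 1   # placeholder derivation, illustration only
    return private_key, public_key

# The victim's device uses a generator whose seed the backdoor's author can
# guess or compute (e.g. derived from the clock, or from a rigged constant).
victim_rng = random.Random(20160216)
victim_priv, victim_pub = generate_keypair(victim_rng)

# The attacker simply re-runs the same generator with the predicted seed
# and obtains the same "private" key -- no brute force at all.
attacker_rng = random.Random(20160216)
attacker_priv, _ = generate_keypair(attacker_rng)

assert attacker_priv == victim_priv
```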

M'vy
11

A definition of encryption back doors for those who do not understand encryption: remember the Battle of Helm's Deep from The Lord of the Rings? The big fortress surrounded by a high wall, with only one way in?

The fortress, the Hornburg, is split into two stages. The Keep is an immensely high structure, accessible from the outside only through a long stone causeway with no railings. A thick wooden gate and high battlements, designed for defence. In between the outer and inner walls is a little stepped road that curves into the courtyard. This leads to the main hall, the last defensible part of the fortress, and also to the caves. The whole thing is set up so that an attacker has to face a gauntlet of defenders, with space and holes for archers on all sides.

The only weakness? The proverbial encryption back door? The waste-water gate: the one weakness in the entire structure, which the army of orcs blew up to gain entrance to the fortress.

[image: encryption back door]

Mike Edward Moras
kalina
9

The "exceptional access" thing that law enforcement keeps asking for is best thought of as a master key. You know how in large office buildings, most of the people who work there have keys that only open a few doors, but the janitorial staff and the building management can open all the doors? It would work exactly like that, and it would have exactly the same negative side effects:

  • You have to trust everyone who has a master key to obey the rules for when they're allowed to use it and what they're allowed to do while inside. The movie plot where someone gets a job as a janitor in order to sneak in somewhere they're not supposed to be — that really happens.
  • The master key is extremely valuable. That makes it a target for theft, and makes everyone who can use it a target for extortion.
  • If the master key does get stolen, you have to change all the locks.
  • The locks are more complicated, which probably makes them easier to pick.

These problems are far more severe for cryptosystems than they are for office buildings. This is fundamentally because of scale. The master key for an office building only unlocks one office building, and there are a finite number of copies and keeping track of them is a well-understood physical security problem. In contrast, suppose the Clipper chip had been adopted throughout the USA. Then there would have been a single master key that would have decrypted every phone conversation in the USA. (If I remember correctly, anyway. It's been a while since I had to know exactly how Clipper worked.) That key would have been a small quantity of data, easily stored on a single 3.5" floppy disk. There would have been no practical way to know how many copies existed. And, further suppose that Matt Blaze's demonstration of the insecurity of the system had been published after the chip had been widely adopted (analogous to someone publicizing an easy way to pick a type of master-key lock): then we'd have had to change all the phones. All ~180 million of them.

The 1997 paper "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption" goes into much more detail about this. It is reasonably easy to read and doesn't require a lot of background knowledge.

As a final note, it's worth pointing out that the signing keys for OS updates are in fact exactly this kind of master key. We take the risk in that case because secure distribution of updates is so very valuable... but there are people whose entire job is to make sure nobody steals those keys. As I write this, Apple is making a big public and legal stink about being asked to push a custom update to one phone that would make it easier for the FBI to unlock it — probably more because of the precedent it would set, than because they care terribly much about protecting the privacy of one (late) customer.

zwol
4

There are many things that could be considered as backdoors in encryption algorithms in articles in the media. These don't always agree with the more technical definitions of backdoors, but generally have the result of allowing someone without the password or key to get at the data being protected.

For example, in a mobile phone, a secret PIN code which unlocked the device no matter what the user had set as a PIN would be a backdoor: anyone with that secret code could unlock the device without needing to know the original PIN. A system which brute-forces the PIN is not considered a backdoor - the end result is the same (the data is revealed), but there is no secret that enables it in a general sense. In the Apple case, there is not currently a request for a backdoor into the PIN system itself. However, there is a request for a backdoor in the PIN protection system: it aims to make it possible to recover the PIN by bypassing the protections against brute forcing. This would involve specific code which could then be used on other devices to enable brute-force attacks against PINs.
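
The difference can be made concrete with a few lines of Python; the master PIN below is an invented constant, purely to illustrate what a backdoored check looks like compared with brute force.

```python
# Backdoor vs. brute force on a PIN check (illustration only).
MASTER_PIN = "314159"   # hypothetical secret vendor/agency value

def unlock(entered_pin, user_pin):
    # Backdoored check: the user's PIN *or* the hidden master PIN unlocks the device.
    return entered_pin == user_pin or entered_pin == MASTER_PIN

def brute_force(user_pin):
    # Brute force needs no secret at all: it just tries every 4-digit PIN,
    # which is exactly what retry limits and auto-wipe are meant to prevent.
    for guess in range(10000):
        candidate = f"{guess:04d}"
        if unlock(candidate, user_pin):
            return candidate

print(unlock(MASTER_PIN, user_pin="4821"))   # True: straight in, no guessing
print(brute_force("4821"))                   # "4821", but only after many attempts
```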

There are also rumours about backdoors in encryption algorithms, usually linked with the NSA. In particular, there have been discussions of a weakness in the random number generator used by some forms of elliptic curve cryptography, which allows agencies or individuals who know the weakness to recover data protected by keys generated from it. There are details on this, which include maths, online. Again, this single vulnerability applies to a wide range of systems, without requiring the attacker to break specific keys or passwords. It also has the interesting property of being unprovable, except by the original creator, since it relies on one of the same properties as public key cryptography. This would be a backdoor - it is hidden (in fact, you can't prove it exists except by inference), and once you have the knowledge to use it, you can repeat the trick anywhere. It doesn't matter what additional information the end user has provided (usually passwords or keys) - that never affects your ability to access the protected data.

Matthew
3

The general idea is to create an encryption scheme that can be decrypted in two different ways. One way (the front door) uses an encryption key (like a password) that the encryptor chooses. The other way (the back door) uses an encryption key that the algorithm designer chooses (or at least knows). You can think of the back door key as a master key that is built into the algorithm, but not obvious to someone analysing the algorithm. As such, anyone using the algorithm can pick their own secret key, and then only people who know that key or the master key can decrypt the message.

To give a slightly more concrete, yet approachable example, you must understand the difference between symmetric and asymmetric encryption. Symmetric encryption uses the same key to encrypt and decrypt a message. Asymmetric encryption uses a pair of "matching" keys, one to encrypt and the other to decrypt. With asymmetric keys, you can tell everyone the encryption key, and they can encrypt messages that only you can decrypt. However, symmetric encryption tends to be faster, so in practice people usually use asymmetric keys to encrypt a symmetric key, then use the symmetric key to encrypt the actual message. Using this strategy, you could simply encrypt every message with a random symmetric key, then attach a header to the message containing one copy of the symmetric key encrypted with your asymmetric encryption key and another copy encrypted with the "backdoor" asymmetric encryption key.
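
Here is a minimal sketch of that dual-header idea, using the pyca/cryptography package; the "backdoor" key pair is simply a second RSA key pair that the scheme's designer keeps, and all names are invented for illustration.

```python
# Hybrid encryption with the session key wrapped for two recipients:
# the legitimate user and the holder of the "backdoor" key (illustration only).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
backdoor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held by the designer

# Encrypt the message with a fresh symmetric (session) key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"attack at dawn")

# ...and wrap that key twice: once for the user, once for the back door.
header = {
    "for_user": user_key.public_key().encrypt(session_key, OAEP),
    "for_backdoor": backdoor_key.public_key().encrypt(session_key, OAEP),
}

# Either private key now recovers the session key, and hence the message.
for priv, wrapped in [(user_key, header["for_user"]),
                      (backdoor_key, header["for_backdoor"])]:
    recovered = priv.decrypt(wrapped, OAEP)
    assert Fernet(recovered).decrypt(ciphertext) == b"attack at dawn"
```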

Based on the above, I think it is fair to call a "back door" a hole rather than a weakness. You might argue that some of the cryptographic weaknesses promoted by the NSA could be called a "back door", but since it takes a billion-dollar budget to leverage such a weakness effectively, it is more akin to export controls on strong encryption than to a back door. In either case, it is not typically effective to "hide" the back door. Generally the academic community at least suspects the existence of a back door long before an encryption algorithm becomes a popular standard; it is merely the key to the back door that remains secret.

1

The name "backdoor" should be an intuitive analogy. You don't have to worry about breaching the security on the rest of the house if you can find and open the backdoor.

Sometimes the backdoor is hidden behind a bush or camouflaged into the wall, but if you hide across the road you might see someone using it.

Sometimes the backdoor is left unlocked, or the key is under the mat, or you know the homeowner leaves a copy with the neighbours, and they have an open window above a flat roof and a ladder in the garden.

OrangeDog
1

I think both of those apply, but isn't there a third category?

c) Not a hole per se, but an additional key used for encryption along with the user-supplied passphrase, transferred to or held by the company (or a third party).

In terms of the ongoing Apple iOS encryption case, it seemed to me that the requested approach to creating a back door would be to change how the device is encrypted in the first place: generate a random passphrase in addition to the one the user created, encrypt using both so that either would work, and send the generated passphrase back to Apple's servers for safekeeping (encrypted in transit and wherever they store it, of course) and for retrieval when an unlock is requested by law enforcement.

Obviously this would only be applicable after an iOS update and after the user unlocks their device, so it would be useless for gaining access to the particular phone currently in question, since its owner is dead.

Would this be feasible, and would it count as a back door? And from the experts here: what would be the non-obvious implications, and how would this approach compare in practicality and security to other options, if Apple were forced to add a backdoor mechanism?