
I'm currently learning private-key cryptography. I've been able to see that perfect secrecy is achievable if no assumption is made about the computational power of the attacker.

However, perfect secrecy is quite impractical to achieve, so we relax our requirements and settle for so-called computational secrecy, assuming instead that the computational power of the adversary is bounded.

Then they speak about "efficient adversaries", defined as "running in polynomial time". I'm sorry, I know what asymptotic time complexity is, but here I have no idea what it means.

I'm confused because there seems to be a restriction. Security is said to be preserved only against efficient adversaries, those that run in a feasible amount of time. In other words, polynomial time seems to be presented as a limitation on the adversary, and I don't understand that, because to me polynomial time is the best complexity an algorithm can achieve. If we are protected against polynomial adversaries, shouldn't we be protected against ALL adversaries?

So maybe I'm missing something... Can someone explain what an efficient adversary really is?


1 Answer


Perfect secrecy is achievable in a few cases, such as one-time pads, and, well, that's pretty much it. Most cryptographic protocols are vulnerable to an all-powerful, all-knowing attacker. If you do not put any restriction on what the attacker can do, then

  1. Guess the key.
  2. Profit.

breaks almost any cryptography, as does

  1. Wave a magic wand.
  2. Profit.

So at the very least, our model must exclude attackers who can do things we haven't even imagined. Cryptography also doesn't aim to protect against lead pipe cryptanalysis. So we assume that attackers are only allowed to do things “inside the system”. Based on the Church–Turing thesis, we assume that the attacker can only compute computable functions, and that the function computed by the attacker does not depend on the key. (It's ok if the attacker can make random choices too, as long as their probabilities aren't influenced by the key.)
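
As a rough formal sketch of this modeling assumption (the notation below is my own, following common textbook conventions, not something from the question):

    % Attack model, sketched in standard notation (assumption: n is the
    % security parameter, e.g. the key length).
    \text{guess} = A(c; r), \qquad c = \mathrm{Enc}_k(m)
    % where A is a fixed, computable, possibly randomized algorithm
    % chosen before the key k, and its randomness r is independent of k.

The point is that A is one fixed procedure: it may toss coins (r), but neither its code nor the distribution of those coins may depend on the secret key k.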

Even a deterministic attacker who does not know the key can still break most cryptographic protocols with enough effort: after all, all he has to do is enumerate all possible keys. But for sufficiently large key sizes this becomes unrealistic: as a rule, we aren't interested in attacks that would take more than the lifetime of the universe.
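
To make the "enumerate all possible keys" attack concrete, here is a minimal sketch in Python; the toy cipher, the key length and the speed figures are my own illustrative assumptions, not anything specified above:

    # Minimal brute-force sketch (illustrative assumptions: a toy
    # repeating-key XOR "cipher" and a known plaintext/ciphertext pair).
    from itertools import product

    def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
        # Toy cipher: repeating-key XOR. Deliberately weak; it only
        # serves to show the shape of the generic attack.
        return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

    def brute_force(ciphertext: bytes, known_plaintext: bytes, key_len: int):
        # Exhaustive search: 256**key_len candidates, i.e. exponential
        # in the key length -- which is the whole point.
        for candidate in product(range(256), repeat=key_len):
            key = bytes(candidate)
            if toy_encrypt(key, known_plaintext) == ciphertext:
                return key
        return None

    secret = bytes([0x13, 0x37])
    ct = toy_encrypt(secret, b"attack at dawn")
    print(brute_force(ct, b"attack at dawn", key_len=2))  # recovers the key

A 2-byte key means only 65,536 candidates, so this finishes instantly. A 16-byte (128-bit) key would mean 2^128 candidates; at a generous 10^9 guesses per second that is roughly 10^22 years, versus about 1.4 × 10^10 years for the age of the universe.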

If you wanted to analyse the resistance of an algorithm very precisely, you would need to know exactly how powerful the attacker's computer is, and study all possible ways in which he could perform the computation required by the attack, to find which one is the fastest (or, more generally, the cheapest). This is impractical. Instead, as a first step, we look at the asymptotic behavior of attacks: the way they become harder as the parameters (such as the key size) grow.

In computational complexity, there is a nice break between polynomial and super-polynomial algorithms, and this break fits cryptography well. The usual methods for doing realistic computations tend to be polynomial, whereas enumerating all keys is exponential in the size of the key (or, more precisely, in the strength of the key). So restricting our attention to polynomial attackers combines two nice properties (see the sketch after this list):

  • It is good for getting theoretical results.
  • It does a good job of including realistic attackers and excluding unrealistic ones.
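
To see why this break is a natural place to draw the line, compare how the two kinds of cost scale. The particular running times n^3 and 2^n below are illustrative assumptions of mine, not taken from any specific attack:

    # Illustrative growth comparison: a polynomial attack (n**3 steps)
    # versus exhaustive key search (2**n steps), for security parameter n.
    for n in (40, 80, 128, 256):
        poly = n ** 3   # doubling n only multiplies the cost by 8
        expo = 2 ** n   # each extra bit of key doubles the cost
        print(f"n={n:3d}  n^3={poly:,d}  2^n ~ 10^{len(str(expo)) - 1}")

Doubling the security parameter multiplies the polynomial attacker's cost by a small constant factor, while every single extra bit doubles the exhaustive-search cost. That gap is what lets the polynomial/super-polynomial boundary separate realistic attackers from unrealistic ones.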