Complexity leveraging is a technique that is generally used to prove adaptive security of a selectively secure scheme. For example, we can prove adaptive security of Yao's garbling scheme using complexity leveraging. Many papers mention complexity leveraging, but do not present it in detail. I would be grateful if someone could explain complexity leveraging with a detailed example.
1 Answer
Complexity leveraging is a type of reduction in which the reduction algorithm runs in a complexity class strictly larger than that of the adversary. For example, one may construct a simulator that runs in time $n^{\log n}$ even though the adversary is only polynomial time.

When considering the simulation paradigm, this is not very satisfactory. In zero-knowledge, for example, the argument is that the adversary learns nothing because it could generate everything it sees by running the simulator itself. But when complexity leveraging is used, the adversary actually cannot run the simulator, since the adversary sits in a lower complexity class.

Nevertheless, in some cases complexity leveraging can be meaningful. Consider a security notion that is "game-based", i.e., indistinguishability-based. We then prove that if an adversary running in polynomial time can break our scheme, then there exists an adversary running in time, say, $n^{\log n}$ that can break a hash function (or any other primitive; this is just an example). Now, if the hash function is assumed to be secure against any adversary running in time $n^{\log n}$, then this is enough to guarantee the security of our scheme.
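Since the question asks for a detailed example, here is a minimal, runnable sketch of the guessing argument that underlies the selective-to-adaptive flavor of complexity leveraging (the one relevant to Yao's garbling). All names and parameters here (`N_BITS`, `EPS`, the toy adversary) are illustrative assumptions of mine, not part of any real scheme: the reduction guesses the adversary's adaptive choice up front, plays the selective game with that guess, and aborts on a wrong guess, paying a multiplicative $2^{-n}$ loss in advantage.

```python
import random

N_BITS = 8        # size of the adaptive choice space (illustrative)
EPS = 0.5         # toy adaptive adversary's advantage (illustrative)
TRIALS = 200_000

def adaptive_adversary():
    """Toy adaptive adversary: commits to its target x only at the end
    of the game, and breaks the scheme on that x with probability EPS."""
    x = random.getrandbits(N_BITS)
    wins = random.random() < EPS
    return x, wins

def leveraged_reduction():
    """Complexity-leveraging reduction: guess x up front (the selective
    game forces the reduction to commit before the adversary chooses)
    and abort if the adversary's eventual choice differs from the guess."""
    guess = random.getrandbits(N_BITS)
    x, wins = adaptive_adversary()
    return wins and x == guess

hits = sum(leveraged_reduction() for _ in range(TRIALS))
print(f"empirical selective advantage: {hits / TRIALS:.5f}")
print(f"predicted eps / 2^n:           {EPS / 2 ** N_BITS:.5f}")
```

The printed empirical advantage matches $\varepsilon / 2^{n}$, which is why the selectively secure scheme (or the hash function, in the example above) must be assumed secure even against that exponentially small advantage; this is exactly the "much stronger assumption" cost discussed next.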
I am personally not a big fan of complexity leveraging. Beyond the definitional disadvantages in some cases (e.g., with simulation), there is also the big disadvantage that it requires a much stronger assumption: not merely that a primitive cannot be broken by poly-time adversaries, but that it cannot be broken by adversaries running in some specific super-polynomial time. However, it has been used successfully to achieve things that are otherwise unknown. In addition, in many cases a first construction used complexity leveraging, and later constructions succeeded in getting rid of the stronger assumption.
========
Added later: The reason why stronger assumptions are needed is as follows. Assume that you have a reduction from ZK to OWFs such that any algorithm distinguishing the simulation from a real execution with non-negligible probability can be used to invert the OWF with non-negligible probability. Now, assume that you use quasi-polynomial simulation. In order to argue that the adversary learns nothing, you need to assume that the one-way function is secure against quasi-polynomial-time inverters. This is because, in order to replace the real proof execution with a simulated one, you need to run in quasi-polynomial time; so you now have a quasi-polynomial-time adversary who is actually running, and the reduction to the OWF only works if that adversary cannot break the OWF. (Of course, this is all handwaving, but it is what happens in the proofs.)
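To spell out the running-time accounting in that reduction (a sketch; the symbols $t_{\mathcal{D}}$ and $t_{\mathcal{S}}$ for the distinguisher's and the simulator's running times are my notation, not from the text above): the inverter built in the reduction runs both the poly-time distinguisher and the quasi-polynomial simulation, so

$$t_{\mathrm{inv}}(n) \;\le\; t_{\mathcal{D}}(n) + t_{\mathcal{S}}(n) \;=\; \mathrm{poly}(n) + n^{\log n} \;=\; n^{O(\log n)},$$

and hence the OWF must be assumed one-way against $n^{O(\log n)}$-time inverters, not merely polynomial-time ones.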