4

I'm curious about two things.

  1. When we define the class of "probabilistic polynomial-time algorithms" in computer science, does it include polynomial-time algorithms that use exponential space? For example, suppose an algorithm is given an input from the domain $\{0,1\}^n$, and it internally queries an exponential-sized table (e.g. $0^n\to0$, $0^{n-1}1\to1$, and so on) and outputs the result. Is it still a polynomial-time algorithm?

  2. In theoretical cryptography, a one-way function $f:\{0,1\}^*\to\{0,1\}^*$ must satisfy the hard-to-invert property stated in the block below. If the answer to the question above is yes, is it possible to construct an algorithm $A'$ that, using an exponential table as described above, outputs a preimage of $f(x)$ for every $x\in\{0,1\}^n$? If so, that would imply it is impossible to design a one-way function, which is definitely not true. So what have I missed?

    For every probabilistic polynomial-time algorithm $A'$, every positive polynomial $p(\cdot)$, and all sufficiently large $n$'s,

    $\Pr[A'(f(U_n),1^n)\in f^{-1}(f(U_n))]<\frac{1}{p(n)}$

    where $U_n$ is a random variable uniformly distributed over $\{0,1\}^n$.
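To make the proposal concrete, the table-lookup idea can be sketched as follows (a toy, hypothetical $f$ stands in for a candidate one-way function; the point is that the precomputed table has $2^n$ entries):

```python
# Sketch of the question's proposal: invert f by a precomputed lookup table.
# f here is a hypothetical placeholder (bit reversal, NOT one-way).
from itertools import product

def f(x: str) -> str:
    # toy stand-in for a candidate one-way function
    return x[::-1]

def build_inverse_table(n: int) -> dict:
    # enumerates all 2^n inputs -- building the table takes exponential time
    return {f("".join(bits)): "".join(bits)
            for bits in product("01", repeat=n)}

n = 3
table = build_inverse_table(n)   # 2**3 = 8 entries
y = f("011")
assert f(table[y]) == y          # the table inverts f on every image point...
assert len(table) == 2 ** n      # ...but its size is exponential in n
```

The question is exactly whether an algorithm carrying such a table still counts as probabilistic polynomial-time.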

Raphael
euna

2 Answers

10

Regarding your first question, what you're missing is where your "exponential table" comes from. Your algorithm has a finite description and must work for every $n$, so it cannot explicitly contain the table for all $n$. It could contain instructions for computing the table, but then it would first have to execute them, and constructing an exponential-size table takes exponential time.

On the other hand, your program could use a (supposedly) exponential amount of uninitialized space. Since its running time is polynomial, it only ever accesses polynomially many cells. You can implement memory access in such a way that if $T$ cells are ever accessed then only $\tilde{O}(T)$ memory is used (exercise). The corresponding running time might become much worse, but it remains polynomial.
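The exercise can be sketched with a dictionary-backed memory: only cells that are actually written consume space, so a polynomial-time computation over an "exponential" address space uses only polynomial memory (a sketch; a real implementation would use balanced trees or hashing to get the stated $\tilde{O}(T)$ bound):

```python
# Sketch: simulate a huge (say 2^n-cell) tape with memory proportional
# to the number of cells actually touched, not to the address space.
class SparseMemory:
    def __init__(self, default=0):
        self._cells = {}          # address -> value, only for written cells
        self._default = default

    def read(self, addr: int):
        return self._cells.get(addr, self._default)

    def write(self, addr: int, value) -> None:
        self._cells[addr] = value

mem = SparseMemory()
mem.write(2 ** 100, 7)           # an address far beyond any physical memory
assert mem.read(2 ** 100) == 7
assert mem.read(3) == 0          # untouched cells read as the default
assert len(mem._cells) == 1      # space used = cells written, not 2**100
```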

A third possibility is non-uniform computation models, which are popular in cryptography. Here the idea is that the algorithm is allowed to use a certain hard-coded amount of data which depends only on $n$. However, this data has to be of polynomial size. This restriction comes from the interpretation of the model in terms of circuits: a machine running in polynomial time corresponds to a family of polynomial-size circuits, one for each $n$. If we now drop the constraint that all these circuits come from a single algorithm, we get non-uniform polynomial time, which is equivalent to polynomial-time computation with polynomial-size advice depending only on $n$ (exercise).
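One way to picture computation with advice (a toy sketch; the advice function and predicate are hypothetical, and the key restriction is that the advice for input length $n$ has size polynomial in $n$):

```python
# Toy model of non-uniform computation: besides its input x, the algorithm
# receives an advice string that depends only on n = len(x).
# Non-uniform polynomial time requires len(advice(n)) <= poly(n).
def advice(n: int) -> str:
    # hypothetical polynomial-size advice: here, just n written in binary
    return bin(n)[2:]

def algorithm_with_advice(x: str) -> bool:
    a = advice(len(x))
    # a non-uniform algorithm may consult the advice freely
    return x.count("1") == int(a, 2)   # toy predicate: "all bits of x are 1"

assert algorithm_with_advice("111") is True
assert algorithm_with_advice("101") is False
# An exponential-size lookup table as advice would violate the
# len(advice(n)) <= poly(n) restriction.
```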

The answer to the first question should resolve the second one. I would just mention that sometimes, instead of probabilistic polynomial-time algorithms, one considers (deterministic) polynomial-size circuits.

Yuval Filmus
-1
  1. Nope. In the standard framework used in the cryptographic community, that kind of algorithm is usually not considered probabilistic polynomial-time; if necessary, we tweak the definition to ensure that it isn't.

    Technically speaking, some in cryptography have suggested using the running time plus the size of the program's code as the measure of the time of an attack (think of it this way: if the code of the program is $k$ bits long, it takes $k$ time steps just to read that code into memory before you can start running it). So when someone in cryptography says "running time", they might actually mean that sum. I believe I first learned this technical detail from Phil Rogaway, but I have no idea whether he was the first.

    If you don't believe me, here's a citation to the literature: see Bellare et al., "The security of the cipher block chaining message authentication code", Journal of Computer and System Sciences, Section 2.2 (Model of computation), where they write:

    We fix some particular Random Access Machine (RAM) as a model of computation. [...] When we speak of A's running time this will include A's actual execution time plus the length of A's description (meaning the length of the RAM program that describes A). This convention eliminates pathologies caused if one can embed arbitrarily large lookup tables in A's description.

    Notice how they anticipated this corner case and chose definitions that prevent it from being problematic. With this definition, everything works out in a reasonable way, and your proposed algorithm (with an exponential-sized constant) is not a probabilistic polynomial-time algorithm.

    Many cryptographers aren't always this careful about their definition of running time; it is very common to be a little sloppy and omit this kind of detail. Often, it doesn't matter. But once exponential-sized constants enter the picture, this detail matters very much, and that's when it is essential to know about it if you want to argue over formalisms.

  2. Nope.
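The convention from Section 2.2 can be sketched numerically (hypothetical numbers; the point is simply that an exponential hard-coded table inflates the description length, and hence the measured "running time"):

```python
# Sketch of the Bellare et al. convention: "running time" = execution
# steps + length of the algorithm's description (its code, including
# any hard-coded tables).
def attack_cost(execution_steps: int, description_bits: int) -> int:
    return execution_steps + description_bits

n = 128
# A table-lookup inverter runs fast but hard-codes roughly 2^n entries
# of n bits each, so its description is exponential:
table_lookup = attack_cost(execution_steps=n, description_bits=2 ** n * n)
# An honest polynomial-time attack has polynomial code and runtime
# (10_000 bits of code is an arbitrary illustrative figure):
honest = attack_cost(execution_steps=n ** 3, description_bits=10_000)

assert table_lookup >= 2 ** n        # exponential under this measure
assert honest <= n ** 3 + 10_000     # still polynomial in n
```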

By the way, I stand by my answer, despite the (unexplained) downvote. Many folks are not aware that the research community has already anticipated this kind of subtlety, and few papers make a big deal of it, so I don't blame others for being unaware of the relevant definitions in the cryptographic community.

D.W.