
A language $L$ is in BPP if there exists a polynomial-time randomised TM that outputs the correct answer with probability at least $1/2+1/p(n)$ for some polynomial $p(n)$, where $n$ is the length of the input. This probability can be amplified to $1-2^{-q(n)}$, for some polynomial $q(n)$, by repeating the algorithm polynomially many times and taking the majority answer.
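As an illustration of the repetition-and-majority step, here is a minimal Python sketch; the base decider `base_decide` and its $0.55$ success probability are hypothetical stand-ins for an actual BPP machine.

```python
import random
from collections import Counter

def amplify(base_decide, x, repetitions):
    """Run the base randomised decider `repetitions` times on input x
    and return the majority vote of its True/False answers."""
    votes = Counter(base_decide(x) for _ in range(repetitions))
    return votes[True] > votes[False]

# Hypothetical toy base decider: the "correct" answer is True exactly for
# even-length inputs, and the decider is right only with probability 0.55.
def base_decide(x):
    correct = (len(x) % 2 == 0)
    return correct if random.random() < 0.55 else (not correct)

if __name__ == "__main__":
    # With an odd, sufficiently large number of repetitions the majority
    # answer is wrong only with exponentially small probability.
    print(amplify(base_decide, "abcd", repetitions=1001))
```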

Is it necessary for this bound to be centred around the constant $1/2$? Can we have a randomised algorithm that answers correctly with probability at least $c+1/p(n)$ for some constant $c<1/2$ and still amplify the success probability in polynomial time?

The proof for the $1/2 + 1/p(n)$ case uses a Chernoff bound on the lower tail, which requires $0 < \delta <1$. In that case $\delta= 1-1/(2p)$, which means $p$ must be greater than $1/2$. Proof here.
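For reference, one common statement of the lower-tail Chernoff bound (the constant in the exponent varies between sources) is

$$\Pr\left[\sum_i X_i \le (1-\delta)\,\mu\right] \;\le\; e^{-\delta^2 \mu /2}, \qquad 0 < \delta < 1,$$

where the $X_i$ are independent indicator variables (here, indicators that an individual run answers correctly) and $\mu = \mathbb{E}\!\left[\sum_i X_i\right]$.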

However, here is a proof that weak BPP = strong BPP, where strong BPP is BPP as we know it, and weak BPP requires that if $x\in L$ then $P(\text{TM accepts } x) \geq s(n)+1/p(n)$, and if $x \not\in L$ then $P(\text{TM accepts } x) \leq s(n)$, where $p(n)$ is any polynomial and $s(n)$ is any polynomial-time computable function.

e_noether

1 Answer


If $c < 1/2$ then for any problem there is an algorithm that answers correctly with probability at least $c+1/n$, say. For the finitely many small $n$ with $c + 1/n > 1/2$, the algorithm just outputs a hardwired correct answer. Once $n$ is large enough that $c + 1/n \leq 1/2$, the algorithm just tosses a fair coin.
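A minimal sketch of this trivial algorithm, with the hypothetical choice $c = 0.4$ and a hypothetical lookup table `HARDWIRED` standing in for the finitely many small inputs:

```python
import random

C = 0.4

# Hypothetical hardwired answers for the finitely many input lengths with
# C + 1/n > 1/2 (for C = 0.4 that is n < 10); a real table would list them all.
HARDWIRED = {"": True, "0": False, "01": True}

def trivial_decide(x):
    n = len(x)
    if n == 0 or C + 1.0 / n > 0.5:
        # Small input: output the stored correct answer (correct with probability 1).
        return HARDWIRED.get(x, True)
    # Large input: a fair coin is already correct with probability 1/2 >= C + 1/n.
    return random.random() < 0.5
```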

What went wrong? For BPP amplification to work, we need a gap between the promise for a Yes instance and the promise for a No instance. In your definition of "weak BPP" (not a standard term), you are given an algorithm such that:

  • On a Yes instance, accepts with probability at least $s(n) + 1/p(n)$.
  • On a No instance, accepts with probability at most $s(n)$.

In contrast, in your suggested definition, we have the following promise:

  • On a Yes instance, accepts with probability at least $c + 1/p(n)$.
  • On a No instance, accepts with probability at most $1 - c - 1/p(n)$.

If $c < 1/2$ then for large enough $n$, the two acceptance intervals overlap (both contain acceptance probability $1/2$), and so the definition is meaningless. When $c = 1/2$, there is an inverse-polynomial gap, which can be amplified to a constant gap, matching the standard definition of BPP.
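As a concrete check (with the hypothetical choices $c = 0.4$ and $p(n) = n$), the two intervals $[c + 1/p(n),\,1]$ and $[0,\,1 - c - 1/p(n)]$ overlap exactly when

$$c + \frac{1}{p(n)} \;\le\; 1 - c - \frac{1}{p(n)} \quad\Longleftrightarrow\quad \frac{1}{p(n)} \;\le\; \frac{1}{2} - c,$$

which for $c = 0.4$ and $p(n) = n$ holds for every $n \ge 10$; a machine that ignores its input and flips a fair coin then satisfies both promises at once.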

Yuval Filmus