
Does coNP-completeness imply NP-hardness? In particular, I have a problem that I have shown to be coNP-complete. Can I claim that it is NP-hard? I realize that I can claim coNP-hardness, but I am not sure if that terminology is standard.

I am comfortable with the claim that if an NP-complete problem belongs to coNP, then NP=coNP. However, these lecture notes state that if an NP-hard problem belongs to coNP, then NP=coNP. This would then suggest that I cannot claim that my problem is NP-hard (or that I have proven coNP=NP, which I highly doubt).

Perhaps there is something wrong with my thinking. My thinking is that a coNP-complete problem is NP-hard because:

  1. every problem in NP can be reduced to its complement, which will belong to coNP.
  2. the complement problem in coNP reduces to my coNP-complete problem.
  3. thus we have a reduction from every problem in NP to my coNP-complete problem, so my problem is NP-hard.
Austin Buchanan

2 Answers


You claim that every problem in NP can be reduced to its complement, and this is true for Turing reductions, but (probably) not for many-one reductions. A many-one reduction from $L_1$ to $L_2$ is a polytime function $f$ such that for all $x$, $x \in L_1$ iff $f(x) \in L_2$.
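To make the distinction concrete, here is a small Python sketch over a toy language (strings with an even number of 1s); the language and all function names are mine, purely for illustration:

```python
# Toy language: L = binary strings with an even number of 1s.
def decide_L(x: str) -> bool:
    return x.count("1") % 2 == 0

# Turing reduction from the complement of L to L: call the decider
# for L as an oracle and flip its answer.
def decide_L_complement(x: str) -> bool:
    return not decide_L(x)

# A many-one reduction from the complement of L to L may NOT flip
# answers; it must map x to a single instance f(x) with
#   x in complement(L)  <=>  f(x) in L.
# For this toy language appending a "1" works, but in general no
# such instance transformation is known to exist.
def f(x: str) -> str:
    return x + "1"

# Sanity check on a few strings.
assert all(decide_L_complement(x) == decide_L(f(x))
           for x in ["", "1", "10", "110"])
```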

If some problem $L$ in coNP were NP-hard, then for any language $M \in NP$ there would be a polytime function $f$ such that for all $x$, $x \in M$ iff $f(x) \in L$. Since $L$ is in coNP, this gives a coNP algorithm for $M$, showing that NP$\subseteq$coNP, and so NP$=$coNP. Most researchers don't expect this to be the case, and so problems in coNP are probably not NP-hard.
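Spelled out with the verifier characterization of coNP (a one-line unpacking; here $M_L$ denotes a $\forall$-verifier for $L$ and $f$ the assumed reduction):

$$x \in M \iff f(x) \in L \iff \forall z \in \{0,1\}^{p(|f(x)|)} : M_L(f(x), z) = 1,$$

and since $f$ is computable in polynomial time, this is exactly a coNP characterization of $M$.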

The reason we use Karp reductions rather than Turing reductions is so that we can distinguish between NP-hard and coNP-hard problems. See this answer for more details (Turing reductions are called Cook reductions in that answer).

Finally, coNP-hard and coNP-complete are both standard terminology, and you are free to use them.

Yuval Filmus

The problem with that line of reasoning is the first step. In the deterministic case, a TM $\text{M}$ that decides $L$ also gives you a decider for $\overline{L}$: just flip the output bit of $\text{M}$. This works because the output of $\text{M}$ depends only on $x$; compare this with the verifier definition of $\text{NP}$, where the output also depends on a certificate.

In the nondeterministic case, using the verifier definition, it's not known whether you can build an $\text{NP}$-verifier from a $\text{coNP}$-verifier or vice versa; the problem is that the two definitions quantify over certificates differently. Let $L \in \text{coNP}$; then we have a verifier DTM $\text{M}$ such that:

$$x \in L \iff \forall z \in \{0,1\}^{p(|x|)}:\text{M}(x,z) = 1$$

For $\overline{L}$, the verifier $\text{M'}$ will have to fulfill

$$x \in \overline{L} \iff \exists z \in \{0,1\}^{q(|x|)}:\text{M'}(x,z) = 1$$
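To see where it breaks, note what plain negation of the first condition gives (just pushing the negation through the quantifier):

$$x \in \overline{L} \iff \neg\,\forall z \in \{0,1\}^{p(|x|)} : \text{M}(x,z) = 1 \iff \exists z \in \{0,1\}^{p(|x|)} : \text{M}(x,z) = 0.$$

So flipping the output bit of $\text{M}$ does produce an $\exists$-style verifier, witnessing $\overline{L} \in \text{NP}$, but it does not produce the $\forall$-style verifier that membership in $\text{coNP}$ requires.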

Why can't we then just use the $\text{NP}$-verifier $\text{M'}$ of some language $\text{K} \in \text{NP}$ to build a $\text{coNP}$-verifier $\text{M}$ for $\text{K}$? The problem is the $\forall$-quantifier required for a $\text{coNP}$-verifier. The $\text{NP}$-verifier $\text{M'}$ may give you $0$ for some (wrong) certificate even for $x \in \text{K}$, so you can't go from $\exists$ to $\forall$.

Maybe more abstractly: it's not clear how to build (in polynomial time) a machine that accepts exactly the elements of a language regardless of which certificate comes with them, from a machine that accepts exactly the elements that come with some working certificate, even though other certificates for those same elements may fail.
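Here is a brute-force Python sketch of that phenomenon (exponential and purely illustrative; the formula and names are mine): a CNF formula can be satisfiable while its $\exists$-verifier still rejects some certificates, so flipping the verifier's output does not behave like a $\forall$-verifier.

```python
from itertools import product

def verify_sat(formula, assignment):
    """NP-style verifier: accept iff `assignment` satisfies the CNF
    `formula` (a list of clauses; literals are +/- variable indices)."""
    def lit_true(lit):
        val = assignment[abs(lit) - 1]
        return val if lit > 0 else not val
    return all(any(lit_true(l) for l in clause) for clause in formula)

# phi = (x1 or x2) and (not x1 or x2): satisfiable, but not by
# every assignment.
phi = [[1, 2], [-1, 2]]
certs = list(product([False, True], repeat=2))

# Exists-quantifier: phi is in SAT because SOME certificate is accepted.
assert any(verify_sat(phi, c) for c in certs)

# Flipping the verifier's output bit does not yield a forall-style
# (coNP) verifier: the flipped machine accepts some certificates and
# rejects others for the very same satisfiable phi, because a wrong
# certificate already made the original verifier output 0.
flipped = [not verify_sat(phi, c) for c in certs]
assert any(flipped) and not all(flipped)
```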

G. Bach