
Given $n$ boolean variables $x_1,\ldots,x_n$ each of which is assigned a positive cost $c_1,\ldots,c_n\in\mathbb{Z}_{>0}$ and a boolean function $f$ on these variables given in the form $$f(x_1,\ldots,x_n)=\bigwedge_{i=1}^k\bigoplus_{j=1}^{l_i}x_{r_{ij}}$$ ($\oplus$ denoting XOR) with $k\in\mathbb{Z}_{>0}$, integers $1\leq l_i\leq n$ and $1\leq r_{i1}<\cdots<r_{il_i}\leq n$ for all $i=1,\ldots,k$, $j=1,\ldots,l_i$, the problem is to find an assignment of minimum cost for $x_1,\ldots,x_n$ that satisfies $f$, if such an assignment exists. The cost of an assignment is simply given by $$\sum_{\substack{i\in\{1,\ldots,n\}\\x_i\,\text{true}}}c_i.$$ Is this problem NP-hard, that is to say, is the accompanying decision problem "Is there a satisfying assignment of cost at most some value $K$" NP-hard?
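To make the problem statement concrete, here is a minimal sketch of how an instance can be checked, assuming clauses are represented as lists of 1-based variable indices $r_{i1},\ldots,r_{il_i}$ and an assignment as a dict mapping indices to 0/1 (this representation is my own, not part of the problem statement):

```python
from functools import reduce
from operator import xor

def satisfies(clauses, assignment):
    """f(x) = AND over clauses of (XOR of the listed variables).

    `clauses` is a list of index lists (1-based, the r_{ij});
    `assignment` maps each variable index to 0 or 1.
    """
    return all(reduce(xor, (assignment[r] for r in clause)) == 1
               for clause in clauses)

def cost(costs, assignment):
    """Sum the costs c_i of the variables set to true."""
    return sum(c for i, c in costs.items() if assignment[i])

# f = (x1 XOR x2) AND (x2 XOR x3), costs c1=3, c2=1, c3=2
clauses = [[1, 2], [2, 3]]
costs = {1: 3, 2: 1, 3: 2}
a = {1: 1, 2: 0, 3: 1}        # x1 = x3 = true, x2 = false
print(satisfies(clauses, a))  # True: each clause has exactly one true variable
print(cost(costs, a))         # 5 = c1 + c3
```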

Now, the standard XOR-SAT problem is in P, for it maps directly to the question of solvability of a system of linear equations over $\mathbb{F}_2$ (see, e.g., https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#XOR-satisfiability). The solution set (if nonempty) is an affine subspace of $\mathbb{F}_2^n$, so the problem reduces to picking the element of minimal cost from that subspace. Alas, that subspace may be quite large, and indeed, rewriting $f$ as a binary $k\times n$ matrix, with a $1$ in the $i$-th row and $r_{ij}$-th column for each $x_{r_{ij}}$ and zeros elsewhere, we get a cost minimization problem subject to $$Ax=1,$$ where $A$ is said matrix, $x$ is the column vector consisting of $x_1,\ldots,x_n$, and $1$ is the all-ones vector. This is an instance of a binary linear programming problem, which is known to be NP-hard in general. So the question is, is it NP-hard in this particular instance as well?
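The polynomial-time feasibility part can be sketched with plain Gaussian elimination over $\mathbb{F}_2$; the function below (my own illustrative code, with rows of $A$ and the vector $b$ as 0/1 lists) returns one solution of $Ax=b$ or `None` if the system is inconsistent, leaving free variables at 0:

```python
def solve_gf2(A, b):
    """Gaussian elimination over F2: return one solution of Ax = b
    (as a list of 0/1), or None if the system is inconsistent."""
    rows = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    n = len(A[0])
    pivots = []   # (row, column) of each pivot
    r = 0
    for c in range(n):
        # find a row at or below r with a 1 in column c
        for i in range(r, len(rows)):
            if rows[i][c]:
                rows[r], rows[i] = rows[i], rows[r]
                break
        else:
            continue
        # eliminate column c from every other row (XOR = addition in F2)
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ d for a, d in zip(rows[i], rows[r])]
        pivots.append((r, c))
        r += 1
    # inconsistent iff some all-zero row has right-hand side 1
    if any(not any(row[:n]) and row[n] for row in rows):
        return None
    x = [0] * n   # free variables set to 0
    for i, c in pivots:
        x[c] = rows[i][n]
    return x

# (x1 XOR x2) AND (x2 XOR x3): A = [[1,1,0],[0,1,1]], b = [1,1]
print(solve_gf2([[1, 1, 0], [0, 1, 1]], [1, 1]))  # → [0, 1, 0]
```

The hard part, of course, is not finding *some* point of the affine solution space but the cheapest one.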

1 Answer


A classical result of Berlekamp, McEliece, and van Tilborg shows that the following problem, maximum likelihood decoding, is NP-complete: given a matrix $A$ and a vector $b$ over $\mathbb{F}_2$, and an integer $w$, determine whether there is a solution to $Ax = b$ with Hamming weight at most $w$.

You can reduce this problem to your problem. The system $Ax = b$ is equivalent to a conjunction of equations of the form $x_{i_1} \oplus \cdots \oplus x_{i_m} = \beta$. If $\beta = 1$, the equation is already of the required form. If $\beta = 0$, we XOR an extra variable $y$ onto the left-hand side, and force this variable to be $1$ by adding the extra equation $y = 1$; a single shared $y$ suffices for all such equations. Assign each $x_i$ cost $1$, and, since your costs must be positive, assign $y$ cost $1$ as well. As $y$ is true in every satisfying assignment, $Ax = b$ has a solution of Hamming weight at most $w$ if and only if the constructed instance has a satisfying assignment of cost at most $K = w + 1$ (or $K = w$ if no equation has $\beta = 0$ and $y$ is omitted). We have now reached an equivalent formulation of maximum likelihood decoding which is an instance of your problem, so your problem is NP-hard.
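The reduction is mechanical enough to write down; here is a sketch under the same conventions as above (clauses as lists of 1-based indices whose XOR must equal $1$; the representation and function name are mine):

```python
def reduce_decoding(A, b, w):
    """Map a maximum-likelihood-decoding instance (A, b, w) over F2 to a
    min-cost XOR-SAT instance (clauses, costs, K) per the reduction above.
    Variables 1..n are the x_i; variable n+1 is the forced helper y."""
    n = len(A[0])
    y = n + 1
    clauses, need_y = [], False
    for row, beta in zip(A, b):
        support = [j + 1 for j, a in enumerate(row) if a]
        if beta == 1:
            clauses.append(support)        # already of the form XOR = 1
        else:
            clauses.append(support + [y])  # XOR y in to flip the RHS to 1
            need_y = True
    costs = {i: 1 for i in range(1, n + 1)}
    if need_y:
        clauses.append([y])                # force y = 1
        costs[y] = 1                       # positive cost, so raise K by 1
        return clauses, costs, w + 1
    return clauses, costs, w

# x1 XOR x2 = 1, x2 XOR x3 = 0, weight bound w = 1
clauses, costs, K = reduce_decoding([[1, 1, 0], [0, 1, 1]], [1, 0], 1)
print(clauses)  # [[1, 2], [2, 3, 4], [4]]
print(K)        # 2
```

The map is computable in time polynomial in the size of $A$, as a many-one reduction requires.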

Yuval Filmus