
In this paper, Papadimitriou gives a proof that integer linear programming is in NP. Some points of the proof are obscure to me. Let me state the theorems involved:

Lemma 1: Let A be a nonsingular $m \times m$ integer matrix. Then the components of the solution of $Ax = b$ are all rationals with numerator and denominator bounded by $(ma)^m$ where $a = \max\{|a_{ij}|, |b_j|\}$.
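Lemma 1 can be checked numerically on a small random instance. The following sketch (my own illustration, not from the paper) solves $Ax=b$ exactly with Cramer's rule over `fractions.Fraction` and verifies that every numerator and denominator respects the $(ma)^m$ bound:

```python
from fractions import Fraction
from itertools import permutations
import random

def det(M):
    """Exact determinant via the Leibniz formula (fine for small m)."""
    m = len(M)
    total = Fraction(0)
    for perm in permutations(range(m)):
        sign = 1
        for i in range(m):          # sign = parity of the permutation
            for j in range(i + 1, m):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(1)
        for i in range(m):
            term *= M[i][perm[i]]
        total += sign * term
    return total

def cramer_solve(A, b):
    """Solve Ax = b exactly by Cramer's rule: x_j = det(A_j) / det(A)."""
    m, d = len(A), det(A)
    return [det([[b[i] if c == j else A[i][c] for c in range(m)]
                 for i in range(m)]) / d
            for j in range(m)]

random.seed(1)
m, a = 4, 5
while True:  # draw a random nonsingular integer matrix with entries in [-a, a]
    A = [[random.randint(-a, a) for _ in range(m)] for _ in range(m)]
    if det(A) != 0:
        break
b = [random.randint(-a, a) for _ in range(m)]
x = cramer_solve(A, b)
bound = (m * a) ** m  # Lemma 1's bound (ma)^m = 20^4 = 160000
assert all(sum(A[i][j] * x[j] for j in range(m)) == b[i] for i in range(m))
assert all(abs(xi.numerator) <= bound and xi.denominator <= bound for xi in x)
```

The bound falls out of Cramer's rule: each component is a quotient of two determinants, and each determinant is at most $m!\,a^m \le (ma)^m$ in absolute value.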

Lemma 2 includes the following equivalence:

Lemma 2: Let $v_1,\ldots,v_k$ be $k > 0$ vectors in $\{0,\pm 1,\ldots,\pm a\}^m$ and let $M = (ma)^{m+1}$. Then the following are equivalent:

a) There exist $k$ reals $\alpha_1,\ldots,\alpha_k \ge 0$, not all zero, such that $\sum \alpha_j v_j = 0$.

b) There exist $k$ integers $\alpha_1, \ldots, \alpha_k$, $0 \le \alpha_j \le M$ for $j = 1,\ldots,k$, not all zero, such that $\sum \alpha_j v_j = 0$.
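The passage from a) to b) amounts to replacing a real dependence by a rational one and clearing denominators; the bound $M$ is what Lemma 1 contributes. A toy illustration (my own example, not the proof itself):

```python
from fractions import Fraction
from math import lcm

# Integer vectors in {0, ±1}^2 (so a = 1, m = 2) with a nonnegative
# real dependence, as in condition (a) of Lemma 2.
v = [(1, 0), (-1, 1), (0, -1)]
alpha = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 2)]

# Check the dependence: sum_j alpha_j v_j = 0 componentwise.
assert all(sum(a_ * vec[i] for a_, vec in zip(alpha, v)) == 0 for i in range(2))

# Clear denominators to pass to integer multipliers as in condition (b).
L = lcm(*(a_.denominator for a_ in alpha))
beta = [int(a_ * L) for a_ in alpha]
M = (2 * 1) ** (2 + 1)  # (ma)^(m+1) with m = 2, a = 1
assert all(0 <= b_ <= M for b_ in beta) and any(beta)
```

Note that clearing denominators alone does not establish the bound $M$ in general; that is precisely where Lemma 1's control of the numerators and denominators enters.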

If I assume that, by restricting to the independent vectors among the $v$'s, I somehow obtain a square system, then the best I could hope for is this: writing the solutions as $p_i/q_i$ and clearing denominators, I get $|q_1 \cdots q_k p| \le (ma)^{m(k+1)}$, which is worse than $M$. What am I missing?

Edit

Further details can be found in Papadimitriou and Steiglitz's "Combinatorial Optimization: Algorithms and Complexity".

The above, combined with Matoušek and Gärtner's "Understanding and Using Linear Programming", leads to a smooth proof.


1 Answer


If a) of Lemma 2 holds, you can suppose that $\alpha_k=-1$ and that the set $\{v_j\}_{j=1}^{k-1}$ is linearly independent.

It follows that the equation $$\alpha_1v_1+\cdots+\alpha_{k-1}v_{k-1}=v_k$$ has a real solution. After eliminating redundant rows you obtain a square system $Ax=b$, with integer entries $|a_{ij}|\leq a$, which has a unique solution $\alpha=(\alpha_1,\ldots,\alpha_{k-1})^T$.

Lemma 1 tells us that each $\alpha_j=\frac{p_j}{q_j}$ is a rational number (an irreducible fraction), and gives the bound $((k-1)a)^{k-1}$ on $|p_j|$ and $|q_j|$. Thus you find that $$\beta _1v_1+\cdots+\beta_{k-1}v_{k-1}+\beta_kv_k=0,$$ with $\beta_k=-q_1q_2\cdots q_{k-1}$ and $\beta_j=p_j\prod_{i\neq j}q_i$ for $j<k$.

So we can choose $\beta_j$ such that $$|\beta_j|< ((k-1)a)^{(k-1)^2}\leq (ma)^{m^2}.$$
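The construction of the $\beta_j$ from the fractions $p_j/q_j$ can be sketched on a tiny concrete instance (my own hypothetical example, chosen so the dependence is easy to see):

```python
from fractions import Fraction
from math import prod

# Suppose alpha_1 v_1 + alpha_2 v_2 = v_3 with alpha_j = p_j / q_j
# (hypothetical instance: k = 3, m = 2, a = 2).
v = [(2, 0), (0, 2), (1, 1)]
alpha = [Fraction(1, 2), Fraction(1, 2)]  # v_3 = (1/2) v_1 + (1/2) v_2
assert all(sum(a_ * v[j][i] for j, a_ in enumerate(alpha)) == v[2][i]
           for i in range(2))

# beta_j = p_j * (product of the other denominators), for j < k;
# beta_k = -(product of all denominators).
q = [a_.denominator for a_ in alpha]
beta = [alpha[j].numerator * prod(q[i] for i in range(len(q)) if i != j)
        for j in range(len(q))]
beta.append(-prod(q))

# The beta_j are integers and give beta_1 v_1 + beta_2 v_2 + beta_3 v_3 = 0.
assert all(sum(b_ * v[j][i] for j, b_ in enumerate(beta)) == 0 for i in range(2))
```

Here `beta` comes out as `[2, 2, -4]`; the sign of $\beta_k$ is negative because the answer normalized $\alpha_k=-1$, and the size of each $\beta_j$ is at most the product of $k-1$ factors each bounded by $((k-1)a)^{k-1}$, which is where the exponent $(k-1)^2$ comes from.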

Remark:

  1. Note, for instance, that if $|p_j|=((k-1)a)^{k-1}$ then $|q_j|<((k-1)a)^{k-1}$.

  2. If some $|\beta_j|= ((k-1)a)^{(k-1)^2}$, then $|q_i|=((k-1)a)^{k-1}$ for at least $k-2$ indices $i$, and the equation holds with small integers $\beta_l$. This suggests that a better bound on the $\beta_j$ can be found by some combinatorial argument.

  3. Even the bound in Lemma 1 can be improved to the optimal one, $$\left(a\sqrt{m}\right)^m,$$ if you apply Hadamard's inequality (please see this thread) and Cramer's rule instead of Gaussian elimination. I think that this paper is helpful.

  4. It follows from Siegel's lemma that, if $k>m$, there exist integers $\gamma_1,\ldots,\gamma_k$, not all zero, such that $$\gamma_1v_1+\cdots+\gamma_kv_k=0$$ and $$|\gamma_j|\leq \left(\frac{\sqrt{\det(BB^T)}}{D}\right)^{\frac{1}{k-m}},$$ where $D$ is the greatest common divisor of the $m\times m$ minors of the matrix $B$ whose columns are the $v_i$.

  5. Perhaps you will find useful results by searching for "Farkas lemma" on SearchOnMath.
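Remark 3's improvement can be sanity-checked exhaustively in a tiny case. The sketch below (my own check, with $m=2$, $a=2$) confirms that $|\det A|$ never exceeds Hadamard's bound $\left(a\sqrt{m}\right)^m = a^m m^{m/2} = 8$, which is already smaller than Lemma 1's $(ma)^m = 256$:

```python
from itertools import product

# Hadamard's bound (a*sqrt(m))^m for m = 2, a = 2, computed exactly
# as a^m * m^(m/2) since m is even.
m, a = 2, 2
bound = a**m * m**(m // 2)

# Largest |det| over all 2x2 integer matrices with entries in [-a, a].
worst = max(abs(p * s - q * r)
            for p, q, r, s in product(range(-a, a + 1), repeat=4))
assert worst <= bound
print(worst, bound)  # prints "8 8": the Hadamard bound is attained here
```

The bound is tight in this case (e.g. the matrix with rows $(2,-2)$ and $(2,2)$ has determinant $8$), which is consistent with calling $\left(a\sqrt{m}\right)^m$ optimal.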