3

Consider a polyhedron specified by the following system of equalities and inequalities: $$ \begin{aligned} &\mathbf{A}\mathbf{x} = \mathbf{b},\\ &\mathbf{x} \geqslant \mathbf{0}. \end{aligned} $$ Are there ways to check whether the set is actually empty? Are some reasonable heuristics available? Finally, if the polyhedron is not empty, how do I find a point with the smallest number of non-zero components?

For my particular problem, the number of columns of $\mathbf{A}$ is significantly larger than the number of rows.

RobPratt
  • 50,938
uranix
  • 7,773
  • 1
  • 22
  • 57
  • More rows than columns? You have an overdetermined set of linear equations then. The first thing you should do is see if you can even solve the equations. – Michael Grant May 04 '15 at 01:35
  • Sorry for the confusion; the system is underdetermined and has full row rank. – uranix May 04 '15 at 05:41
  • It's still not clear what is "special" about this question. This is just the constraint set for a linear program in standard form. Methods for determining its feasibility are well known. – Michael Grant May 04 '15 at 12:25

2 Answers

4

Unfortunately, determining a solution with the smallest number of non-zeros is intractable in general. It can be expressed as the following binary linear program: \begin{array}{ll} \text{minimize}_{x,y} & \sum_i y_i \\ \text{subject to} & A x = b \\ & 0 \leq x \leq M y \\ & y \in \{0,1\}^n \end{array} where $M$ is a large number known to bound the largest feasible values of $x$.

A common heuristic is to solve \begin{array}{ll} \text{minimize}_{x} & \sum_i x_i \\ \text{subject to} & A x = b \\ & x \geq 0 \\ \end{array} This tends to produce a solution with many zero entries, but without a guarantee that the count is truly minimal.

There are a variety of other heuristics one can employ. For instance, some people employ iterative reweighting schemes, which involve solving a sequence of problems of the form \begin{array}{ll} \text{minimize}_{x} & \sum_i d_i^{(k)} x_i \\ \text{subject to} & A x = b \\ & x \geq 0 \\ \end{array} The first iteration uses $d_i^{(1)}\equiv 1$; i.e., the same problem as above. This produces a solution $x^{(1)}$. For each subsequent iteration, you choose $$d^{(k+1)}_i = 1 / (x_i^{(k)} + \epsilon)$$ where $\epsilon$ is small. This puts extra weight on small entries of $x$ to drive more of them to zero.
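As a rough sketch of the reweighting heuristic (not part of the answer itself), here is how the loop might look in Python using `scipy.optimize.linprog`; the matrix `A`, vector `b`, and the function name `reweighted_l1` are made-up placeholders for illustration:

```python
# Sketch of the iterative reweighting heuristic: repeatedly solve the
# weighted LP  min sum_i d_i x_i  s.t.  A x = b, x >= 0, updating the
# weights to d_i = 1/(x_i + eps) after each solve.
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=5, eps=1e-6):
    n = A.shape[1]
    d = np.ones(n)              # first iteration: plain l1 objective
    x = None
    for _ in range(iters):
        res = linprog(c=d, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        if not res.success:     # infeasible system or solver failure
            return None
        x = res.x
        d = 1.0 / (x + eps)     # weight small entries more heavily
    return x

# Small illustrative underdetermined system (2 equations, 4 unknowns)
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x = reweighted_l1(A, b)
```

A handful of iterations is usually enough; the weights stabilize quickly once the small entries have been driven to (near) zero.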

Another approach is a homotopy method, for instance \begin{array}{ll} \text{minimize}_{x} & \sum_i x_i^{p_k} \\ \text{subject to} & A x = b \\ & x \geq 0 \\ \end{array} For $k=1$, you choose $p_1=1$; i.e., the original linear program. Then you solve a sequence of problems with $p_k\rightarrow 0$, using the previous solution as the initial point for the next. For $p_k<1$ the problem is non-convex, so there is no guarantee that the solution is global. I personally like iterative reweighting better.
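One hedged way to sketch the homotopy idea with an off-the-shelf LP solver is successive linearization: replace each non-convex subproblem $\min \sum_i x_i^{p}$ by its first-order approximation at the previous iterate, which is again an LP with weights $p\,(x_i + \epsilon)^{p-1}$. The function name `homotopy_lp`, the power schedule, and the data below are all illustrative assumptions, not part of the answer:

```python
# Linearized homotopy sketch: for each p_k, minimize the gradient
# approximation of sum_i x_i^p at the previous iterate, which is an LP.
import numpy as np
from scipy.optimize import linprog

def homotopy_lp(A, b, powers=(1.0, 0.7, 0.4, 0.1), eps=1e-6):
    n = A.shape[1]
    x = None
    for p in powers:                      # schedule with p_k -> 0
        if x is None:
            w = np.ones(n)                # p_1 = 1: the plain LP
        else:
            w = p * (x + eps) ** (p - 1)  # gradient of sum x_i^p at x
        res = linprog(c=w, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        if not res.success:
            return None
        x = res.x
    return x

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])
x = homotopy_lp(A, np.array([3.0, 2.0]))
```

Because each step only linearizes the non-convex objective, this sketch inherits the same caveat as the answer: there is no guarantee of a global optimum.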

Michael Grant
  • 20,110
0

This problem can be solved with the use of an auxiliary problem of the form $$ \operatorname{minimize} p = \sum_i y_i\\ \operatorname{s.t.} \mathbf{Ax} + \mathbf{Ey} = \mathbf{b}\\ \mathbf{x} \geqslant 0\\ \mathbf{y} \geqslant 0, $$ where $\mathbf{E}$ is the identity matrix. Assuming $\mathbf{b} \geqslant 0$ (which can be achieved by multiplying the rows of $\mathbf{Ax}=\mathbf{b}$ by $\operatorname{sgn} \mathbf{b}$), a basic feasible solution for this problem is $\mathbf{x} = \mathbf{0}, \mathbf{y} = \mathbf{b}$. If the optimal value of the auxiliary problem is $p = 0$, then the corresponding $\mathbf{x}$ is a feasible point for the original problem; otherwise, no such point exists.
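This phase-one construction can be sketched in Python with `scipy.optimize.linprog`; the function name `is_feasible`, the tolerance, and the sample data are illustrative assumptions:

```python
# Feasibility test for {x : A x = b, x >= 0} via the auxiliary problem
# min sum(y)  s.t.  A x + y = b,  x >= 0,  y >= 0  (after making b >= 0).
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b, tol=1e-8):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    s = np.where(b < 0, -1.0, 1.0)        # flip row signs so b >= 0
    A, b = A * s[:, None], b * s
    m, n = A.shape
    # Stacked variables z = (x, y); objective sum(y); constraints [A I] z = b.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_eq = np.hstack([A, np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
    if not res.success or res.fun > tol:  # p > 0 => original system empty
        return False, None
    return True, res.x[:n]                # x-part is a feasible point

feasible, x0 = is_feasible([[1.0, 1.0, -1.0]], [2.0])
```

If the optimum `p` is above the tolerance, the polyhedron is empty; otherwise the returned `x0` satisfies the original constraints up to solver accuracy.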

uranix
  • 7,773
  • 1
  • 22
  • 57