29

I have the generic optimization problem:

$$ \max c^T|x|$$

$$ \text{s.t. } Ax \le b $$

$x$ is unrestricted

How do I convert it into a linear programming problem?

Online I read something about letting $x$ equal the difference of two nonnegative numbers, but I could not intuitively grasp why that worked. Plus, the example applied only to minimization problems where the entries of $c$ are all greater than $0$.

I'm sort of stuck.

6 Answers

22

I think the question you are trying to ask is this: If we have a set of linear constraints involving a variable $x$, how can we introduce $|x|$ (the absolute value of $x$) into the objective function?

Here is the trick. Add a constraint of the form $$t_1 - t_2 = x$$ where $t_i \ge 0$. The Simplex Algorithm will set $t_1 = x$ and $t_2 = 0$ if $x \ge 0$; otherwise, $t_1 = 0$ and $t_2 = -x$. So $t_1 + t_2 =|x|$ in either case.

On the face of it, this trick shouldn't work, because if we have $x = -3$, for example, there are seemingly many possibilities for $t_1$ and $t_2$ other than $t_1 = 0$ and $t_2 = 3$; for example, $t_1 = 1$ and $t_2 = 4$ seems to be a possibility. But the Simplex Algorithm will never choose one of these "bad" solutions because it always chooses a vertex of the feasible region, even if there are other possibilities.

EDIT added Mar 29, 2019

For this trick to work, the coefficient of the absolute value in the objective function must be positive and you must be minimizing, as in

min $2(t_1+t_2)+\dots$

or the coefficient can be negative if you are maximizing, as in

max $-2(t_1+t_2)+\dots$

Otherwise, you end up with an unbounded objective function, and the problem must be solved by other methods, e.g. mixed-integer linear programming.

(If I knew this before, I had forgotten. Thanks to Discretelizard for pointing this out to me.)
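
To make the substitution concrete, here is a minimal sketch using SciPy's `linprog` (the toy data below, the cost $2|x| + 3y$ and the constraints $x + y \ge 4$, $y \le 10$, are my own invention and not from the question):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem (illustrative only): minimize 2|x| + 3y
#   subject to  x + y >= 4,  y <= 10,  y >= 0,  x free.
# Substitute x = t1 - t2 with t1, t2 >= 0, so |x| becomes t1 + t2.
# Decision vector: z = [t1, t2, y].

c = np.array([2.0, 2.0, 3.0])           # objective 2*(t1 + t2) + 3*y

A_ub = np.array([
    [-1.0,  1.0, -1.0],                 # -(t1 - t2) - y <= -4, i.e. x + y >= 4
    [ 0.0,  0.0,  1.0],                 #  y <= 10
])
b_ub = np.array([-4.0, 10.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)

t1, t2, y = res.x
print("x =", t1 - t2, " |x| =", t1 + t2, " objective =", res.fun)
# Because the cost on (t1 + t2) is positive and we are minimizing, any optimal
# solution has min(t1, t2) = 0, so t1 + t2 really does equal |x| here.
```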

awkward
  • This doesn't work with glpsol (from glpk) forced to use the simplex algorithm. I give a shortened version because MathProg is too verbose: maximize abspos+absneg when y<=10 & x<=y & v=x-y & abspos-absneg=v & y>=0 & x>=0 & abspos>=0 & absneg>=0. It gives the error PROBLEM HAS UNBOUNDED SOLUTION. The problem that this translates, maximize |x-y| when y<=10 & x<=y & x,y>=0, is well defined. Thus the transformation doesn't work (or I did it wrong). – Eponymous Apr 22 '14 at 05:40
  • This approach makes more sense to me. – tuxdna Apr 02 '15 at 11:06
  • @Eponymous The reason your approach fails is that this technique only works for minimization problems: we have to do something different for maximization objectives. – Discrete lizard Mar 29 '19 at 09:54
  • What works for the maximisation case is to add the constraint $t_1+t_2=x$ and optimise $t_1-t_2$. – Discrete lizard Mar 29 '19 at 10:00
  • @Discretelizard The method outlined above should work for either minimization or maximization, no change needed. – awkward Mar 29 '19 at 12:51
  • I don't think so. If we maximize and $x\geq 0$, we can e.g. set $t_1=2x, t_2=x$ so that $t_1-t_2 =x$, but then $t_1+t_2=3x\neq |x|$. In fact, the problem becomes unbounded, as we can pick $t_1$ as large as we want (this explains the error message). Of course, that something similar works for max as well as min is no surprise, as we can always transform min problems into max problems and vice versa. But that does not mean that no transformation is needed. – Discrete lizard Mar 29 '19 at 13:46
  • @Discretelizard You can use the method, unchanged, for minimization problems. If it should happen that your problem is unbounded, there is no help for that. – awkward Mar 29 '19 at 13:54
  • @awkward It seems we're not understanding each other. If your goal is to maximise $|x|$, and $x$ is in fact bounded, let's say it is a given constant, then the linear program $\max t_1+t_2$ given $t_1-t_2=x$ has an unbounded solution, for the reason I mentioned above. On the other hand, the solution to the linear program $\max t_1-t_2$ given $t_1+t_2=x$ does have $|x|$ as its solution. – Discrete lizard Mar 29 '19 at 13:58
  • @Discretelizard I agree, we are talking past each other. If your objective function is unbounded, the only way you can fix that is to change the problem. Changing the method of solution will not help. – awkward Mar 29 '19 at 14:05
  • @awkward Do you agree that the exact approach described in this answer does not work for maximization problems? – Discrete lizard Mar 29 '19 at 14:06
  • @Eponymous I didn't notice your comment until recently, or I would have replied earlier. Please see the edit added at the end of the answer. – awkward Mar 29 '19 at 17:54
8

I realize this is old, but I just ran into this issue. Please see: http://lpsolve.sourceforge.net/5.1/absolute.htm, which has a great explanation of the solution (both for minimization and maximization of an absolute value), and helped me a lot.

MineR
5

I'm quite late to the party, but all the current answers neglect an important point. The answers given here present tricks for converting an $L^1$-minimization problem into a linear program by introducing one or more auxiliary variables. If $A$ is square, this is not the end of the world: the simplex algorithm runs in roughly $O(n^3)$ time in the average case, so doubling the number of variables (one auxiliary variable per original variable) multiplies the running time by about $8$. Bad, but not horrible. However, if your $L^1$ program is, for example,

$$ \text{minimize} \hspace{0.5em} \lVert b - Ax \rVert_1 \hspace{0.5em} \text{s.t.} \hspace{0.5em} x \geq 0 $$

with $x \in \mathbb{R}^{100}$ and $A \in M_{64000 \times 100}(\mathbb{R})$, then the reformulation needs one auxiliary variable per residual, so turning this into a linear program increases the dimensionality by a factor of (more than) $640$, which increases the running time by a factor of more than a quarter-billion.
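
For concreteness, here is a sketch of that standard reformulation in SciPy, with small invented dimensions (nothing like the $64000 \times 100$ case above): each residual gets its own auxiliary variable $u_i$, so the LP has $n + m$ variables.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 200, 10                      # toy sizes; the answer's example is m=64000, n=100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Reformulation: minimize 1^T u  s.t.  -u <= b - A x <= u,  x >= 0.
# Decision vector z = [x, u] has n + m entries: one auxiliary u_i per residual.
c = np.concatenate([np.zeros(n), np.ones(m)])

I = np.eye(m)
A_ub = np.block([[ A, -I],          #  A x - u <= b   (i.e.  A x - b <= u)
                 [-A, -I]])         # -A x - u <= -b  (i.e.  b - A x <= u)
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))

x = res.x[:n]
print("L1 residual:", np.abs(b - A @ x).sum(), "=", res.fun)
```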

There are special algorithms for approximating $L^1$-minimization whose running times depend on the dimensionality of the "intrinsic" variable, as opposed to the "trick" variables. Emmanuel Candès has one on his website; another is provided by the Python package CVXOPT.

5

A simpler way: if all $c_i$ are nonnegative, it is possible to reformulate the problem as:

$\min c^T y$

$ y_i\geq x_i$

$ y_i\geq -x_i$

$Ax \leq b$

If all the $c_i$ have the same sign, you can easily adapt the same idea; I guess the case of mixed signs can also be dealt with.
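
A minimal sketch of this reformulation with SciPy, on a small invented instance (the particular $c$, $A$, $b$ below are illustrative only, and $c \ge 0$ as assumed above):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: minimize c^T |x|  s.t.  A x <= b, with c >= 0 and x free.
c = np.array([2.0, 1.0])
A = np.array([[-1.0,  0.0],          # -x1 <= -3  (forces x1 >= 3)
              [ 0.0,  1.0]])         #  x2 <=  5
b = np.array([-3.0, 5.0])
n = len(c)

# Decision vector z = [x, y]; minimize c^T y  s.t.  x - y <= 0, -x - y <= 0, A x <= b.
obj = np.concatenate([np.zeros(n), c])
I = np.eye(n)
A_ub = np.block([[ I, -I],
                 [-I, -I],
                 [ A, np.zeros((A.shape[0], n))]])
b_ub = np.concatenate([np.zeros(n), np.zeros(n), b])

bounds = [(None, None)] * n + [(0, None)] * n    # x free, y >= 0
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

x, y = res.x[:n], res.x[n:]
print("x =", x, " c^T|x| =", c @ np.abs(x), " objective =", res.fun)
```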

  • This formulation only makes sense if $x_i$ is positive. If, for example, $x_i=-1$, there exists no $y_i$ that's both weakly smaller than $-1$ and weakly larger than $+1$. ... and of course, if you already know that $x_i$ is positive, then there's no need to worry about the absolute value. – nimmmiaumit Apr 17 '18 at 02:12
  • Actually, I just noted that this problem crept in with the edit... will try to undo the edit to reflect the original idea. – nimmmiaumit Apr 17 '18 at 02:14
  • This (original approach, before the edit by @kvathe) works for positive $c_i$, but I can't adapt it to negative $c_i$. Any leads, @AndreaCassioli? – nimmmiaumit Apr 17 '18 at 02:17
  • I think the first set of constraints should be $y_i \ge x_i$; i.e., the inequality is backwards. – dashnick Aug 02 '20 at 13:33
4

$$ \min |X_a - X_b| $$

can be written as

$$ \min\,(X_{ab1} + X_{ab2}) $$

such that

$$ X_a - X_b = X_{ab1} - X_{ab2}, $$

$$ X_{ab1}, X_{ab2} \geq 0. $$
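
As a quick sanity check, here is a sketch of this substitution in SciPy on a made-up instance (the bounds $X_a \ge 3$ and $0 \le X_b \le 2$ are only there to make the optimum nontrivial):

```python
import numpy as np
from scipy.optimize import linprog

# Decision vector: z = [Xa, Xb, Xab1, Xab2]; minimize Xab1 + Xab2.
c = np.array([0.0, 0.0, 1.0, 1.0])

# Equality constraint:  Xa - Xb = Xab1 - Xab2.
A_eq = np.array([[1.0, -1.0, -1.0, 1.0]])
b_eq = np.array([0.0])

# Illustrative bounds so the optimum is not simply Xa = Xb:
bounds = [(3, None), (0, 2), (0, None), (0, None)]   # Xa >= 3, 0 <= Xb <= 2

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
Xa, Xb, t1, t2 = res.x
print("Xa =", Xa, " Xb =", Xb, " |Xa - Xb| =", t1 + t2)   # expect 3, 2, 1
```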

tuxdna
1

$$ \begin{array}{rcr} \min_{\mathbf{u},\mathbf{x}} \mathbf{1}^\mathrm{T}\mathbf{u} & \mathrm{s.t.} & \mathbf{x}-\mathbf{u} \preceq \mathrm{0} \\ & & -\mathbf{x}-\mathbf{u} \preceq \mathrm{0} \\ & & \mathbf{Ax} \preceq \mathrm{b} \\ \end{array} $$

where $\preceq$ is meant to signify an element-wise less-than-or-equal relationship. I also left out $\mathbf{c}$ since you can just scale the elements of $\mathbf{x}$ to be $c_i x_i$.

As for implementation, some software may require you to combine $\mathbf{x}$ with the dummy vector $\mathbf{u}$, in which case the problem might look like

$$ \begin{array}{rcr} \min_{\mathbf{u},\mathbf{x}} \left[ \mathbf{1} \,\, \mathbf{0}\right]^\mathrm{T}\left[ \mathbf{u} \,\, \mathbf{x} \right] & \mathrm{s.t.} & \mathbf{x}-\mathbf{u} \preceq \mathrm{0} \\ & & -\mathbf{x}-\mathbf{u} \preceq \mathrm{0} \\ & & \mathbf{Ax} \preceq \mathrm{b} \\ \end{array} $$

You would then just take the portion of the final answer that corresponds to $\mathbf{x}$.
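
For completeness, here is a sketch of the stacked-variable version in SciPy, with $\mathbf{c}$ dropped as in the answer and a tiny invented $\mathbf{A}$, $\mathbf{b}$:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny invented data: minimize sum(|x|)  s.t.  A x <= b, x free.
A = np.array([[-1.0, -1.0]])          # -x1 - x2 <= -2  (i.e. x1 + x2 >= 2)
b = np.array([-2.0])
n = A.shape[1]

# Stacked variable z = [u, x]:
#   minimize 1^T u  s.t.   x - u <= 0,  -x - u <= 0,  A x <= b.
obj = np.concatenate([np.ones(n), np.zeros(n)])
I = np.eye(n)
A_ub = np.block([[-I,  I],
                 [-I, -I],
                 [np.zeros((A.shape[0], n)), A]])
b_ub = np.concatenate([np.zeros(n), np.zeros(n), b])

bounds = [(0, None)] * n + [(None, None)] * n    # u >= 0, x free
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

u, x = res.x[:n], res.x[n:]                      # x is the portion you keep
print("x =", x, " sum|x| =", np.abs(x).sum(), " objective =", res.fun)
```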