
I have a quadratic programming problem

$$J_{\min} : \frac{1}{2}x^TQx + c^Tx \\ \text{s.t.} \\ Ax \leq b \\ x \geq 0$$

But I only have a solver for linear programming that uses the simplex method.

$$J_{\max} : c^Tx \\ \text{s.t.} \\ Ax \leq b \\ x \geq 0$$

I have heard that if I change the simplex method so that it minimizes $J_{\max}$ instead of maximizing it, that is called the dual method. It is an easy change with some transposes of the objective function.

But if I combine the dual method with the KKT conditions, can I then solve quadratic programming problems?

If yes: how can I rewrite the KKT conditions so they work with the dual method?

If no: what have I missed? I know that it is possible to use linear programming to solve quadratic programming problems.

1 Answer


Still me. It is possible to solve the quadratic program with the simplex method.

Method 1

The Lagrangian function of the original problem is \begin{equation} L(x; \lambda_1, \lambda_2) = \frac{1}{2}x^TQx + c^Tx + \lambda_1^T(Ax-b) - \lambda_2^T x, \quad \lambda_1, \lambda_2 \ge 0. \end{equation} The KKT conditions are \begin{equation} \begin{array}{c} {\nabla_x L(x;\lambda_1,\lambda_2) = Qx+c+A^T\lambda_1-\lambda_2 = 0.} \\ {Ax-b \leq 0, \quad x \ge 0.} \\ {\lambda_1 \ge 0, \quad \lambda_2 \ge 0.} \\ {\lambda_1^T(Ax-b)=0, \quad \lambda_2^T x= 0.} \end{array} \end{equation} Except for the last (complementary slackness) condition, all of them are linear. We temporarily drop this condition and add a vector of slack variables $y$, which gives \begin{equation} \begin{array}{c} {Qx+c+A^T\lambda_1-\lambda_2 = 0} \\ {Ax-b+y = 0} \\ {x \ge 0, \quad \lambda_1 \ge 0, \quad \lambda_2 \ge 0, \quad y \ge 0} \end{array} \end{equation} We can then use the simplex method to search for a feasible solution of these constraints. However, to satisfy the complementary slackness condition, we must force one of ${\lambda_1}_i$ and $y_i$ to be zero, and likewise one of ${\lambda_2}_i$ and $x_i$. In other words, ${\lambda_1}_i$ and $y_i$ (and similarly ${\lambda_2}_i$ and $x_i$) must never be basic variables at the same time; this is the restricted-basis entry rule of Wolfe's method.
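
Below is a minimal numerical sketch of what Method 1 is searching for. It is not the restricted-basis simplex itself: instead of modifying the pivoting rule, it brute-forces the complementarity pattern on a tiny example (for each pair it fixes one member to zero, solves the remaining linear KKT system, and keeps any nonnegative solution). The function name `solve_qp_via_kkt` and the toy data are my own illustrative choices, and it assumes $Q$ is symmetric positive semidefinite so that any KKT point is a global minimizer.

```python
import itertools
import numpy as np

def solve_qp_via_kkt(Q, c, A, b):
    m, n = A.shape
    nv = n + m + n + m                      # unknowns stacked as z = [x, lambda1, lambda2, y]
    # Linear KKT equations:
    #   Q x + A^T lambda1 - lambda2      = -c
    #   A x                         + y  =  b
    M = np.zeros((n + m, nv))
    M[:n, :n] = Q
    M[:n, n:n + m] = A.T
    M[:n, n + m:n + m + n] = -np.eye(n)
    M[n:, :n] = A
    M[n:, n + m + n:] = np.eye(m)
    rhs = np.concatenate([-c, b])

    best = None
    # Complementarity: lambda1_i * y_i = 0 and lambda2_j * x_j = 0.
    # Brute-force every pattern of "which member of each pair is forced to zero".
    for pick_y in itertools.product([0, 1], repeat=m):
        for pick_x in itertools.product([0, 1], repeat=n):
            fixed = [n + i if p == 0 else n + m + n + i for i, p in enumerate(pick_y)]
            fixed += [n + m + j if p == 0 else j for j, p in enumerate(pick_x)]
            free = [k for k in range(nv) if k not in fixed]
            z = np.zeros(nv)
            sol, *_ = np.linalg.lstsq(M[:, free], rhs, rcond=None)
            z[free] = sol
            # Keep the pattern only if the equations hold exactly and everything is nonnegative.
            if np.allclose(M @ z, rhs, atol=1e-8) and np.all(z >= -1e-8):
                x = z[:n]
                obj = 0.5 * x @ Q @ x + c @ x
                if best is None or obj < best[1]:
                    best = (x, obj)
    return best

# Tiny example: min x1^2 + x2^2 - 2*x1 - 6*x2  s.t.  x1 + x2 <= 2,  x >= 0
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_opt, obj = solve_qp_via_kkt(Q, c, A, b)
print("x* =", x_opt, "objective =", obj)    # expect x* ~ [0, 2], objective ~ -8
```

In Wolfe's method the simplex tableau performs this search implicitly: the restricted-basis entry rule guarantees that only complementary patterns are ever visited, so the enumeration above is replaced by ordinary pivoting.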

Method 2

But now I guess what you want is another method, called the Frank-Wolfe algorithm. For the iterate $x_k$ at step $k$, we compute the gradient \begin{equation} \nabla_x J(x_k) = Qx_k + c \end{equation} and solve the following linearized problem: \begin{equation} \begin{array}{cl} {\min_{x}} & J(x_k)+ \nabla_{x}J(x_k)^T(x-x_k) \\ {\text{s.t.}} & {Ax -b \leq 0} \\ {} & {x \ge 0} \end{array} \end{equation} This is a linear program in $x$, and you can solve it directly with the simplex method. If $s_k$ denotes its solution, the next iterate is $x_{k+1} = x_k + \gamma_k (s_k - x_k)$ for a step size $\gamma_k \in [0,1]$, e.g. from an exact line search on the quadratic objective.
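
Here is a minimal sketch of the Frank-Wolfe iteration described above, with `scipy.optimize.linprog` standing in for your own simplex solver (any LP solver that handles $Ax \le b$, $x \ge 0$ would do). The helper name `frank_wolfe_qp`, the stopping tolerance, and the toy problem are illustrative choices of mine, not part of the answer.

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe_qp(Q, c, A, b, x0, iters=100, tol=1e-10):
    # x0 must be feasible (e.g. x0 = 0 works whenever b >= 0).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = Q @ x + c                        # gradient of J at the current iterate
        # Linearized subproblem: min g^T s  s.t.  A s <= b, s >= 0
        # (linprog's default bounds already enforce s >= 0)
        lp = linprog(g, A_ub=A, b_ub=b)
        s = lp.x
        d = s - x                            # Frank-Wolfe direction
        gap = -(g @ d)                       # duality gap; >= 0, zero at a KKT point
        if gap < tol:
            break
        denom = d @ Q @ d
        # Exact line search on the quadratic along x + gamma*d, clipped to [0, 1]
        gamma = 1.0 if denom <= 0 else min(1.0, gap / denom)
        x = x + gamma * d
    return x

# Same tiny example: min x1^2 + x2^2 - 2*x1 - 6*x2  s.t.  x1 + x2 <= 2,  x >= 0
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x = frank_wolfe_qp(Q, c, A, b, x0=np.zeros(2))
print("x =", x, "objective =", 0.5 * x @ Q @ x + c @ x)   # expect x ~ [0, 2], objective ~ -8
```

Because the objective is quadratic, the exact step size along $d_k = s_k - x_k$ has the closed form $\gamma_k = \min\{1,\, -\nabla J(x_k)^T d_k / (d_k^T Q d_k)\}$, which is what the sketch uses instead of a generic line search.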

  • Thanks. Is this the same method, i.e. do I just use the KKT conditions with simplex? Do you have Matlab? Can you show how to solve a QP problem with the function linprog? :) https://www.researchgate.net/publication/277908295_NEW_APPROACH_FOR_WOLFE'S_MODIFIED_SIMPLEX_METHOD_TO_SOLVE_QUADRATIC_PROGRAMMING_PROBLEMS – euraad Mar 29 '20 at 11:47
  • I took a brief look at this paper, and it seems that it is the same method. And I found that this answer may be helpful to you. In this answer, it says that the 'quadprog' function in Matlab actually reproduces this active set method. – Zenan Li Mar 29 '20 at 15:46
  • So I can just follow this paper and I don't need to change anything in my simplex solver? I wrote it myself in C code. I don't want to change it again, hehe. – euraad Mar 29 '20 at 18:24
  • @Royi, thanks a lot. Here are two different approaches. The first one is called Wolfe's modified simplex method (I guess), which is actually an active set method. And the second one is the Frank-Wolfe algorithm. – Zenan Li Mar 30 '20 at 01:49
  • @DanielMårtensson, I think you still need to modify your simplex method, especially the selection of the basis variable. In this paper, they just drop the complementary slack condition because they can choose the optimal solution from the table. If you don't want to change anything in your code, I guess you can consider the second method. – Zenan Li Mar 30 '20 at 01:54
  • @Ze-NanLi, I think you'd better make it clear that those are 2 different solutions. Something like ##Method 1 and ##Method 2. Also, regarding the 1st method, your last sentence isn't clear. – Royi Mar 30 '20 at 08:07
  • It's an answer, but I don't understand it. What are the steps? As we have discussed, there are two methods: your method and Wolfe's method from the paper. It would be much better if we had a practical example like $J = 1/2x'Qx + c'x$ and tried to apply that with linprog in MATLAB or GLPK in Octave. I know how to use both the QP function in Octave and GLPK in Octave. Perhaps I can help you build up a working problem? :) – euraad Mar 30 '20 at 11:35
  • What do you think is best? I think that Method 1 looks promising. I still don't understand how you can rewrite the KKT conditions into regular linear programming form. – euraad Mar 31 '20 at 21:22
  • I think I understand now. I need to have equality constraints? But can I not solve this with regular equality-constrained quadratic programming? https://en.wikipedia.org/wiki/Quadratic_programming – euraad Mar 31 '20 at 22:24
  • By the way! This cannot be solved with the plain simplex method. It needs to be solved with the Big-M method, because simplex cannot handle equality constraints directly; Big-M can. – euraad Mar 31 '20 at 22:29