4

Let ${\bf A} \in \mathbb{R}^{n \times n}$ be a symmetric positive semidefinite (PSD) matrix, let ${\bf a} \in \mathbb{R}^n$, and let ${\bf B} \in \mathbb{R}^{n \times n}$ be a symmetric indefinite matrix whose eigenvalues come in nonzero pairs $\pm \lambda_i$. Let $c > 0$. Consider the optimization problem

$$ \begin{array}{ll} \underset {{\bf x} \in \mathbb{R}^n} {\text{minimize}} & {\bf x}^\top {\bf A} \, {\bf x} + {\bf a}^\top {\bf x} \\ \text{subject to} & {\bf x}^\top {\bf B} \, {\bf x} = 0\\ & 0 \le x_i \le c \end{array} $$

I am aware that this is not a convex problem because of the quadratic equality constraint. I did find a paper that proposes methods for quadratic optimization under a single quadratic equality constraint, but that doesn't quite fit my problem.

How would one tackle it, and ideally, are there good free solvers available?

Winger 14
  • 2,329
  • Just to clarify. The matrix $B$ has both positive and negative eigenvalues given by some $\pm \lambda_i$ – Winger 14 Jul 22 '23 at 09:15
  • @mark leeds You should erase these comments. – Jean Marie Jul 22 '23 at 09:28
  • Conditions on matrix $B$ imply that dimension $n$ is necessarily even. – Jean Marie Jul 22 '23 at 09:32
  • Indeed that is the case – Winger 14 Jul 22 '23 at 09:35
  • 1
    @Jean Marie: I didn't notice the $\pm$ eigenvalues. I guess it was too late!!! Thanks. – mark leeds Jul 22 '23 at 22:48
  • You can take advantage of the fact that a symmetric matrix having symmetric eigenvalues is similar to a matrix with this structure: $B=\begin{pmatrix}0&C\\ C^\top&0\end{pmatrix}$. – Jean Marie Jul 23 '23 at 04:55
  • @Jean Marie : That's interesting and good to know. Thanks. One last bother on this. If it was the case that B was positive semi-definite then would the approach that I originally suggested be viable as a solution to eliminating the constraint ? Thanks again. – mark leeds Jul 23 '23 at 07:30
  • @Jean Marie: Set said just enough in his answer that I would have to spend much time piecing it together. It sounds very complicated, so no worries regarding my question. – mark leeds Jul 23 '23 at 10:25
  • You could introduce slack variables $y_i^2 := x_i$ (to ensure non-negativity) and $s_i$ (to satisfy the other inequality via $y_i^2 + s_i^2 = c$); the substitution is written out below. – Rodrigo de Azevedo Jul 23 '23 at 12:26
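
For concreteness, here is that substitution written out (my elaboration, not part of the comment): with $x_i = y_i^2$ and $y_i^2 + s_i^2 = c$, the box constraints are eliminated at the cost of degree-4 terms, leaving a polynomial problem in $({\bf y}, {\bf s})$:

$$ \begin{array}{ll} \underset {{\bf y}, \, {\bf s} \in \mathbb{R}^n} {\text{minimize}} & \sum_{i,j} A_{ij} \, y_i^2 y_j^2 + \sum_i a_i \, y_i^2 \\ \text{subject to} & \sum_{i,j} B_{ij} \, y_i^2 y_j^2 = 0 \\ & y_i^2 + s_i^2 = c, \quad i = 1, \dots, n \end{array} $$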

2 Answers

5

This is really more of a comment than a complete answer, but in case you are not aware, a slightly simpler version of your quadratic program (without the linear term and the box constraints) can be reformulated as

\begin{align} \text{minimize}\;\;&\langle A,X\rangle\\[4pt] \text{subject to}\;\;&\langle B,X\rangle=0\\[1pt] &\text{rank}(X)=1\\[1pt] &X\succeq 0 \end{align}

where the final two constraints ensure that $X$ is an outer product $X = xx^\top$, so that $x$ can be recovered from $X$.

Your actual problem can be handled in a similar fashion by first using a process called homogenization.
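
For the record, a sketch of how the homogenization step can go (my outline; the answer does not spell it out): append a scalar variable $t$ constrained by $t^2 = 1$, so that the linear term is absorbed into a purely quadratic form,

$$ {\bf x}^\top {\bf A} \, {\bf x} + {\bf a}^\top {\bf x} = \begin{pmatrix} t \\ {\bf x} \end{pmatrix}^\top \begin{pmatrix} 0 & \tfrac{1}{2} {\bf a}^\top \\ \tfrac{1}{2} {\bf a} & {\bf A} \end{pmatrix} \begin{pmatrix} t \\ {\bf x} \end{pmatrix} \qquad \text{when } t = 1, $$

and the box constraints can likewise be written quadratically, e.g. $0 \le x_i \le c \iff x_i (c - x_i) \ge 0$.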

The non-convex aspect has now been isolated in the rank constraint which, if removed, yields a semidefinite relaxation of your quadratic program. This relaxation is canonical, and people have studied it in detail (I have not, however).

Set
  • 8,251
  • 2
    And how about the box constraint? – Kroki Jul 23 '23 at 10:33
  • @Kroki both the linear term and the box constraint can be homogenized, it just gets a bit messy. – Set Jul 23 '23 at 10:40
  • 1
    Given a solution to the relaxation, how can we obtain a solution $x$? Using best approximation of rank 1? – Winger 14 Jul 23 '23 at 12:29
  • Or you can make the formulation stronger by introducing $x$ as well. – Kroki Jul 23 '23 at 14:00
  • 1
    @ClaudioMoneo Indeed, one way would be to take the eigenvector $v_{\max}(X)$ associated with the largest eigenvalue of $X$ and set $x=\sqrt{\lambda_{\max}(X)}\,v_{\max}(X)$ (sketched in code below). – Set Jul 23 '23 at 19:56
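
To make that recovery step concrete, a minimal NumPy sketch (`X_val` is a placeholder for the numerical solution of the relaxation; feasibility of the rounded point for the original problem is not guaranteed):

```python
import numpy as np

# Placeholder for the numerical solution of the relaxation
# (here a rank-1 example so the recovery is exact)
X_val = np.array([[4.0, 2.0],
                  [2.0, 1.0]])

w, V = np.linalg.eigh(X_val)                  # eigenvalues in ascending order
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]   # sqrt(lambda_max) * v_max
if x_hat.sum() < 0:                           # eigenvectors are sign-ambiguous;
    x_hat = -x_hat                            # pick the sign closer to x >= 0
print(x_hat)                                  # recovers (2, 1) for this example
```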
4

The problem that you have can be written as:

$$ \begin{array}{ll} \underset {{\bf x} \in \mathbb{R}^n, \, {\bf X} \in \mathbb{R}^{n \times n}} {\text{minimize}} & \langle {\bf A}, \, {\bf X}\rangle + \langle {\bf a}, {\bf x}\rangle \\ \text{subject to} & \langle {\bf B}, \, {\bf X}\rangle = 0\\ & \begin{bmatrix}1 & {\bf x}^\top\\ {\bf x} & {\bf X}\end{bmatrix} \succeq 0\\ & \text{rank}({\bf X}) = 1\\ & 0 \le x_i \le c \end{array} $$

Removing the rank constraint yields an SDP that should be easy to solve using any SDP solver, and its optimal value gives a lower bound on your problem. In the literature there are techniques to convexify rank constraints; the basic approach is what is called an iterative rank-minimization approach. If you Google it you will find multiple papers on the subject.
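
For illustration, a minimal CVXPY sketch of this relaxation on placeholder data with the structure assumed in the question (CVXPY's bundled conic solvers such as SCS and Clarabel are free); this is a sketch, not a drop-in solution:

```python
import numpy as np
import cvxpy as cp

# Illustrative placeholder data with the structure assumed in the question
n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T                                # symmetric PSD
C = rng.standard_normal((n // 2, n // 2))
Z = np.zeros((n // 2, n // 2))
B = np.block([[Z, C], [C.T, Z]])           # eigenvalues come in +/- pairs
a = rng.standard_normal(n)
c = 1.0

# Bordered matrix [[1, x^T], [x, X]] declared PSD; the rank constraint is dropped
Y = cp.Variable((n + 1, n + 1), PSD=True)
x, X = Y[0, 1:], Y[1:, 1:]

constraints = [
    Y[0, 0] == 1,                          # top-left entry fixed to 1
    cp.trace(B @ X) == 0,                  # <B, X> = 0
    x >= 0, x <= c,                        # box constraints on x
]
prob = cp.Problem(cp.Minimize(cp.trace(A @ X) + a @ x), constraints)
prob.solve()
print("lower bound on the original problem:", prob.value)
```

The rank-1 rounding sketched in the comments under the other answer can then be applied to `X.value` to extract a candidate ${\bf x}$.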

Kroki
  • 13,619