
Let $\mathbf{B}$ be a positive definite matrix of size $n \times n$, and $\mathbf{b}$ a vector in $\mathbb{R}^n$. It can be shown that the minimizer $\arg\min_{\mathbf{x}} \left(\mathbf{x}^T \mathbf{B} \mathbf{x} - 2\mathbf{b}^T \mathbf{x}\right)$ is $\mathbf{B}^{-1} \mathbf{b}$. Consequently, $\mathbf{b} \mapsto \arg\min_{\mathbf{x}} \left(\mathbf{x}^T \mathbf{B} \mathbf{x} - 2\mathbf{b}^T \mathbf{x}\right)$ is a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$.
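(For completeness, and assuming $\mathbf{B}$ is symmetric, this follows by setting the gradient to zero:
$$\nabla_{\mathbf{x}}\left(\mathbf{x}^T \mathbf{B} \mathbf{x} - 2\mathbf{b}^T \mathbf{x}\right) = 2\mathbf{B}\mathbf{x} - 2\mathbf{b} = 0 \iff \mathbf{x} = \mathbf{B}^{-1}\mathbf{b},$$
and $\mathbf{b} \mapsto \mathbf{B}^{-1}\mathbf{b}$ is linear since it is multiplication by a fixed matrix.)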

However, when we introduce a positivity constraint, i.e., when the solution is sought over the set of non-negative vectors, I suspect that the mapping $\mathbf{b} \mapsto \arg\min_{\mathbf{x} \geq 0} \left(\mathbf{x}^T \mathbf{B} \mathbf{x} - 2\mathbf{b}^T \mathbf{x}\right)$ is piecewise linear, as my numerical experiments suggest. What approaches or methods could be used to prove this kind of result?
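For illustration, here is a minimal sketch of the kind of numerical experiment mentioned above (assuming Python with NumPy/SciPy and an arbitrary $3 \times 3$ positive definite matrix, not anything from the question itself): the constrained problem is recast as a non-negative least-squares problem through the Cholesky factor of $\mathbf{B}$, and the minimizer is tracked along a line in $\mathbf{b}$-space. On segments where the set of strictly positive coordinates (the active set) stays constant, the minimizer varies affinely with $\mathbf{b}$.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Illustrative symmetric positive definite B (n = 3 is an arbitrary choice)
n = 3
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)

# Rewrite x^T B x - 2 b^T x as ||A x - c||^2 (up to a constant), with
# B = A^T A and b = A^T c, using the Cholesky factor B = L L^T and A = L^T.
L = np.linalg.cholesky(B)
A = L.T

def constrained_argmin(b):
    c = np.linalg.solve(L, b)   # c = L^{-1} b, so that A^T c = b
    x, _ = nnls(A, c)           # argmin_{x >= 0} ||A x - c||^2
    return x

# Probe the map b -> x*(b) along a line b(t) = b0 + t d:
# on intervals where the active set is constant, x*(b(t)) is affine in t.
b0, d = rng.standard_normal(n), rng.standard_normal(n)
for t in np.linspace(-2.0, 2.0, 9):
    x = constrained_argmin(b0 + t * d)
    print(f"t={t:+.2f}  active set={tuple(np.flatnonzero(x > 1e-12))}  x={np.round(x, 4)}")
```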

EDIT:

1D case: $B$ is a strictly positive number. Then $b \mapsto \arg\min_{x \geq 0} \left(B x^2 - 2bx\right)$ is simply $\max(0, b/B)$.
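A quick brute-force check of this 1D formula (a small sketch with arbitrary illustrative values of $B$ and $b$, not taken from the question):

```python
import numpy as np

B = 2.0  # any strictly positive number
for b in (-3.0, 0.0, 1.0, 4.0):
    xs = np.linspace(0.0, 10.0, 100001)                # grid over x >= 0
    x_grid = xs[np.argmin(B * xs**2 - 2.0 * b * xs)]   # brute-force minimizer
    x_formula = max(0.0, b / B)
    print(f"b={b:+.1f}: grid={x_grid:.4f}, formula={x_formula:.4f}")
```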

cyril

1 Answer


Section 6.6 in the reference below has the result you are looking for. In particular, check out Theorem 6.11 therein and the discussion immediately before it.

Georg Still (2018). Lectures on Parametric Optimization: An Introduction. Available online at https://optimization-online.org/wp-content/uploads/2018/04/6587.pdf

ProAmateur