
Given a set of vectors $ {\left\{ \boldsymbol{y}_{j} \right\}}_{j = 1}^{n} $, where $ \boldsymbol{y}_{j} \in \mathbb{R}^{m} $, find the optimal vector $ \boldsymbol{x} \in \mathbb{R}^{m} $ of the following convex optimization problem:

$$ \arg \min_{\boldsymbol{x}} \sum_{j = 1}^{n} {\left\| \boldsymbol{x} - \boldsymbol{y}_{j} \right\|}_{1} $$


2 Answers


If one concatenates all the vectors $ {\left\{ \boldsymbol{y}_{j} \right\}}_{j = 1}^{n} $ as the columns of the matrix $ Y $, one can write the equivalent problem:

$$ \arg \min_{\boldsymbol{x}} {\left\| \boldsymbol{x} \boldsymbol{1}_{n}^{T} - Y \right\|}_{1, 1} $$

Where $ {\left\| \cdot \right\|}_{1, 1} $ is the entry-wise matrix norm, namely the sum of the absolute values of all entries.

Taking the sub gradient of $ {\left\| \boldsymbol{x} \boldsymbol{1}_{n}^{T} - Y \right\|}_{1, 1} $ with respect to $ \boldsymbol{x} $ yields:

$$ \frac{\partial }{\partial \boldsymbol{x}} {\left\| \boldsymbol{x} \boldsymbol{1}_{n}^{T} - Y \right\|}_{1, 1} = \operatorname{sign} \left( \boldsymbol{x} \boldsymbol{1}_{n}^{T} - Y \right) \boldsymbol{1}_{n} $$

Where $ \operatorname{sign} \left( \cdot \right) $ is the element-wise sign function.
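
A quick numerical sanity check of this expression (a minimal sketch with random data; at a random point the objective is almost surely smooth, so finite differences apply):

```matlab
% Sketch: compare the sub gradient formula with finite differences of the
% objective at a random (hence almost surely differentiable) point.
m = 4; n = 7;
Y = randn(m, n);
x = randn(m, 1);

objFun = @(x) sum(sum(abs(x * ones(1, n) - Y))); % Sum of L1 distances
g      = sign(x * ones(1, n) - Y) * ones(n, 1);  % Analytic (sub) gradient

gNum = zeros(m, 1);
h    = 1e-6;
for ii = 1:m
    e        = zeros(m, 1); e(ii) = h;
    gNum(ii) = (objFun(x + e) - objFun(x - e)) / (2 * h); % Central difference
end
disp(norm(g - gNum)); % Should be ~0
```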

Since the problem is convex, any point where the sub gradient vanishes (more precisely, where $ \boldsymbol{0} $ belongs to the sub differential) is a global minimum. The $ i $-th entry of the sub gradient above is $ \sum_{j = 1}^{n} \operatorname{sign} \left( {x}_{i} - {y}_{i, j} \right) $, which vanishes exactly when the number of values $ {y}_{i, j} $ above and below $ {x}_{i} $ balances, namely when $ {x}_{i} $ is a median of the values $ \left\{ {y}_{i, 1}, {y}_{i, 2}, \ldots, {y}_{i, n} \right\} $ (in a similar manner to The Median Minimizes the Sum of Absolute Deviations (The ${L}_{1}$ Norm)).

It is nice to see that in this case taking the median of each row of $ Y $ yields an optimal solution (which is not necessarily unique: for an even $ n $, any value between the two middle order statistics of a row is optimal!).
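
A minimal MATLAB sketch of this result (random data; not the repository code referenced below): take the median of each row and verify that random perturbations never improve the objective:

```matlab
% Sketch: the row-wise median minimizes the sum of L1 distances.
m = 5; n = 9;
Y = randn(m, n);

xOpt   = median(Y, 2); % Median of each row of Y
objFun = @(x) sum(sum(abs(x * ones(1, n) - Y)));

% No random perturbation should beat the median solution.
for kk = 1:1000
    assert(objFun(xOpt) <= objFun(xOpt + 0.1 * randn(m, 1)) + 1e-12);
end
```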

MATLAB code, including validation using CVX, can be found in my StackExchange Mathematics Q3566493 GitHub Repository.


If one concatenates all the vectors $ {\left\{ \boldsymbol{y}_{j} \right\}}_{j = 1}^{n} $ as the columns of the matrix $ Y $, one can write the equivalent problem:

$$ \arg \min_{\boldsymbol{x}} {\left\| \boldsymbol{x} \boldsymbol{1}_{n}^{T} - Y \right\|}_{1, 1} $$

Where $ {\left\| \cdot \right\|}_{1, 1} $ is the entry-wise matrix norm, namely the sum of the absolute values of all entries.

So the objective is $ \sum_{j = 1}^{n} \sum_{i = 1}^{m} \left| {x}_{i} - {y}_{i, j} \right| $.
One can employ the standard trick for converting an $ {L}_{1} $ problem into a Linear Programming problem (see Is There an $ {L}_{1} $ Norm Equivalent to Ordinary Least Squares?):

$$\begin{aligned} \arg \min_{\boldsymbol{x}, \boldsymbol{t}} \quad & \sum_{i = 1}^{m} \sum_{j = 1}^{n} {t}_{i, j} \\ \text{subject to} \quad & {x}_{i} - {t}_{i, j} \leq {y}_{i, j} \quad i \in \left\{1, 2, \ldots, m \right\}, \, j \in \left\{1, 2, \ldots, n \right\} \\ & - {x}_{i} - {t}_{i, j} \leq - {y}_{i, j} \quad i \in \left\{1, 2, \ldots, m \right\}, \, j \in \left\{1, 2, \ldots, n \right\} \end{aligned}$$

Writing this in the standard form required by an LP solver (see MATLAB's linprog()) gives:

$$\begin{aligned} \arg \min_{\boldsymbol{x}, \boldsymbol{t}} \quad & \boldsymbol{f}^{T} \begin{bmatrix} \boldsymbol{x} \\ \boldsymbol{t} \end{bmatrix} \\ \text{subject to} \quad & A \begin{bmatrix} \boldsymbol{x} \\ \boldsymbol{t} \end{bmatrix} \preceq \boldsymbol{b} \end{aligned}$$

Where $ \boldsymbol{t} \in \mathbb{R}^{m n} $ stacks the auxiliary variables $ {t}_{i, j} $ column by column and $ \boldsymbol{f} = \begin{bmatrix} \boldsymbol{0}_{m} \\ \boldsymbol{1}_{m n} \end{bmatrix} $.

So for the case above:

$$ A = \begin{bmatrix} \boldsymbol{1}_{n} \otimes {I}_{m} & -{I}_{m n} \\ - \boldsymbol{1}_{n} \otimes {I}_{m} & -{I}_{m n} \end{bmatrix}, \; \boldsymbol{b} = \begin{bmatrix} \operatorname{Vec} \left( Y \right) \\ - \operatorname{Vec} \left( Y \right) \end{bmatrix} $$

Where $ \boldsymbol{1}_{n} \otimes {I}_{m} $ (Kronecker product) is the identity matrix $ {I}_{m} $ stacked vertically $ n $ times.

Where $ \operatorname{Vec} \left( \cdot \right) $ is the Vectorization Operator.
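
Assembling these matrices explicitly, a minimal sketch using kron() and linprog() (assuming MATLAB's Optimization Toolbox; variable ordering $ \begin{bmatrix} \boldsymbol{x} \\ \boldsymbol{t} \end{bmatrix} $ with column-major stacking of $ {t}_{i, j} $, as above):

```matlab
% Sketch: solve the problem with linprog() and compare to the median solution.
m = 5; n = 9; % Odd n, so the per-row minimizer (the median) is unique
Y = randn(m, n);

C = kron(ones(n, 1), eye(m));      % I_m stacked vertically n times
f = [zeros(m, 1); ones(m * n, 1)]; % Objective: sum of the t variables
A = [ C, -eye(m * n); ...
     -C, -eye(m * n)];
b = [Y(:); -Y(:)];                 % Y(:) is Vec(Y)

z   = linprog(f, A, b);
xLp = z(1:m);
disp(norm(xLp - median(Y, 2), inf)); % ~0, up to solver tolerance
```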

Another point of view is given by:

$$ \arg \min_{\boldsymbol{x}} {\left\| C \boldsymbol{x} - \boldsymbol{d} \right\|}_{1} $$

Where $ C = \begin{bmatrix} {I}_{m} \\ \vdots \\ {I}_{m} \end{bmatrix} = \boldsymbol{1}_{n} \otimes {I}_{m} $ (the identity matrix $ {I}_{m} $ concatenated vertically $ n $ times) and $ \boldsymbol{d} = \operatorname{Vec} \left( Y \right) $.

Then building the Linear Programming problem is easier to see (though the matrices are just as above).
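
The two views agree since $ {\left\| C \boldsymbol{x} - \boldsymbol{d} \right\|}_{1} = \sum_{j = 1}^{n} \sum_{i = 1}^{m} \left| {x}_{i} - {y}_{i, j} \right| $; a quick sketch with random data:

```matlab
% Sketch: the stacked formulation gives the same objective value.
m = 5; n = 9;
Y = randn(m, n);
x = randn(m, 1);

C = kron(ones(n, 1), eye(m)); % Vertical stack of I_m, n times
d = Y(:);                     % Vec(Y)

obj1 = sum(sum(abs(x * ones(1, n) - Y))); % Sum of L1 distances
obj2 = norm(C * x - d, 1);                % Stacked L1 objective
disp(abs(obj1 - obj2));                   % ~0
```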

MATLAB code, including validation using CVX, can be found in my StackExchange Mathematics Q3566493 GitHub Repository.
