
I have $K$ images $I_i$, $i \in \{1, \ldots, K\}$, of size $M \times N$. I wish to find weights $w_i$, with $w_i \in [0,1]$ and $\sum_{i=1}^{K} w_i = 1$, such that $$\left\| \sum_{i=1}^{K} w_i I_i \right\|_{TV}$$ is minimal. I could also use something like the $L^2$ norm of the finite differences, or another smoothness metric.

I believe this should be an easier problem than TV regularization, since I am optimizing only a small number of weights. On the other hand, requiring the weights to sum to one makes it a constrained optimization problem even for quadratic smoothness measures.
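For instance, with the $L^2$ smoothness measure and ignoring the box constraint for a moment, the problem has a closed form: letting $Q$ be the $K \times K$ Gram matrix of the image finite differences, $Q_{ij} = \langle \nabla I_i, \nabla I_j \rangle$, the equality-constrained minimizer of $w^{T} Q w$ is

$$ w^\ast = \frac{Q^{-1} \mathbf{1}}{\mathbf{1}^{T} Q^{-1} \mathbf{1}}, $$

valid whenever $Q$ is invertible and the resulting $w^\ast$ happens to lie in $[0,1]^K$; otherwise a small QP must be solved.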

How would you attack this problem?

VojtaK
  • Can you share such a set of images? I'd like to try to solve it and share the solution and code. – Royi Oct 28 '24 at 19:36

2 Answers


Let

$$ \mathbf{A} = \left[ \mathrm{vec}(\mathbf{I}_1), \mathrm{vec}(\mathbf{I}_2), \dots, \mathrm{vec}(\mathbf{I}_K) \right] \in \mathbb{R}^{MN \times K}, $$

then the problem you want to solve, if I understand things correctly, is

$$ \mathbf{w}^\ast = \arg \min_\mathbf{w} \mathrm{TV}( \mathbf{Aw} ) \,\,\,\, \mathrm{s.t.} \,\,\,\, \mathbf{1}^\mathrm{T} \mathbf{w} = 1. $$

The above can be reformulated as

$$ \mathbf{w}^\ast = \arg \min_\mathbf{w} \Vert \mathbf{DAw} \Vert_1 + \lambda \cdot \Vert \mathbf{B} \mathbf{w} - \mathbf{y} \Vert_2, $$

where $\mathbf{D}$ is the first-order difference operator and $\lambda \in \mathbb{R}$ is a user-chosen weight based on your tolerance for how closely $\mathbf{Bw}$ must match $\mathbf{y}$. For your case, $\mathbf{B} = \mathbf{1}^\mathrm{T}$ is a row vector of all ones and $\mathbf{y}$ is just the scalar one.

The above is called a least-mixed norm problem and you can find several methods for solving it online.

However, it just occurred to me that you have the added constraint that $w_i \in [0,1]$, which might not be handled by anything you find. I am going to leave this answer anyway, since you may be able to augment a least-mixed-norm solver with a step that projects the current estimate onto the feasible set at each iteration. (Note that $\mathbf{1}^\mathrm{T} \mathbf{w} = 1$ together with $w_i \geq 0$ already implies $w_i \leq 1$, so projecting onto the unit simplex handles the full constraint set; see the sketch below.)
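For a quick sanity check of this formulation, a convex-modeling layer such as CVXPY can handle both the penalty and the box constraint directly. A minimal sketch on toy random data (the 1-D difference matrix below is a simplified stand-in for a full 2-D TV operator, which would also take differences along the second image axis):

```python
import numpy as np
import cvxpy as cp

# Toy data: K vectorized M x N images stacked as the columns of A
M, N, K = 16, 16, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((M * N, K))

# First-order differences of the stacked vector (simplified 1-D stand-in
# for the full 2-D TV difference operator)
n = M * N
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

lam = 10.0  # penalty weight; larger values enforce sum(w) = 1 more tightly
w = cp.Variable(K)
objective = cp.norm1(D @ A @ w) + lam * cp.abs(cp.sum(w) - 1)
problem = cp.Problem(cp.Minimize(objective), [w >= 0, w <= 1])
problem.solve()
print(w.value)
```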


Let

$$ \boldsymbol{A} = \left[ \operatorname{vec}(\boldsymbol{I}_1), \operatorname{vec}(\boldsymbol{I}_2), \dots, \operatorname{vec}(\boldsymbol{I}_K) \right] \in \mathbb{R}^{MN \times K} $$

Your objective can be written as:

$$ \arg \min_{\boldsymbol{w}} {\left\| \boldsymbol{D} \boldsymbol{A} \boldsymbol{w} \right\|}_{1}, \; \text{ subject to } \; \boldsymbol{w} \in \Delta_{K} $$

Where $\boldsymbol{D}$ is the finite-differences operator (so the objective matches the anisotropic TV norm) and $\Delta_{K} = \left\{ \boldsymbol{w} \in \mathbb{R}^{K} \, : \, \boldsymbol{w} \geq 0, \; \boldsymbol{1}^{T} \boldsymbol{w} = 1 \right\}$ is the Probability Simplex. Note that the constraint $w_i \leq 1$ is implied by membership in the simplex.

There is an efficient way to project onto the Unit Simplex.
See Orthogonal Projection onto the Unit Simplex.
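A minimal sketch of that sorting-based projection (the standard $O(K \log K)$ algorithm; `v` is the point to be projected):

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]                  # sort entries in descending order
    css = np.cumsum(u)                    # running sums of the sorted entries
    k = np.arange(1, len(v) + 1)
    # Largest index where the running threshold keeps the entry positive
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)  # shift that makes the sum equal 1
    return np.maximum(v - tau, 0.0)
```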

Yet, the objective function is non-smooth.
One simple way to solve it, though slow, is the Projected Subgradient Method.
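A sketch of that iteration, reusing the `project_onto_simplex` helper above; the diminishing step size is a common, untuned choice:

```python
import numpy as np

def projected_subgradient(A, D, num_iter=500, step0=1.0):
    """Minimize ||D A w||_1 over the unit simplex by projected subgradient."""
    K = A.shape[1]
    DA = D @ A                           # precompute, shape (num_diffs, K)
    w = np.full(K, 1.0 / K)              # start at the simplex barycenter
    for t in range(1, num_iter + 1):
        g = DA.T @ np.sign(DA @ w)       # subgradient of w -> ||DA w||_1
        w = project_onto_simplex(w - (step0 / np.sqrt(t)) * g)
    return w
```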

Another way is to convert the ${L}_{1}$ norm minimization into a Linear Programming problem. The standard way to achieve that is by introducing auxiliary variables that bound $\left| \boldsymbol{D} \boldsymbol{A} \boldsymbol{w} \right|$ elementwise, as sketched below.
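A sketch of that reformulation using `scipy.optimize.linprog`, with auxiliary variables $\boldsymbol{t}$ such that $-\boldsymbol{t} \leq \boldsymbol{D} \boldsymbol{A} \boldsymbol{w} \leq \boldsymbol{t}$ and objective $\boldsymbol{1}^{T} \boldsymbol{t}$:

```python
import numpy as np
from scipy.optimize import linprog

def solve_tv_weights_lp(A, D):
    """Solve min ||D A w||_1 s.t. w in the unit simplex, as an LP."""
    DA = D @ A
    m, K = DA.shape
    # Stacked variable x = [w; t]; minimize the sum of t
    c = np.concatenate([np.zeros(K), np.ones(m)])
    # |DA w| <= t, written as two blocks of inequalities
    A_ub = np.block([[DA, -np.eye(m)], [-DA, -np.eye(m)]])
    b_ub = np.zeros(2 * m)
    A_eq = np.concatenate([np.ones(K), np.zeros(m)])[None, :]  # sum(w) = 1
    bounds = [(0, 1)] * K + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:K]
```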

Another way would be converting it into an IRLS problem, as in Least Absolute Deviation (LAD) Line Fitting / Regression, though it might require a lot of memory; a sketch is given below.
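A sketch of the IRLS iteration, enforcing only the sum-to-one constraint through a small KKT system (the nonnegativity constraint would still need, e.g., a projection step):

```python
import numpy as np

def solve_tv_weights_irls(A, D, num_iter=50, eps=1e-6):
    """IRLS for min ||D A w||_1 s.t. sum(w) = 1 (nonnegativity not enforced)."""
    DA = D @ A
    m, K = DA.shape
    w = np.full(K, 1.0 / K)
    for _ in range(num_iter):
        r = DA @ w
        q = 1.0 / np.maximum(np.abs(r), eps)  # reweighting ~ 1 / |residual|
        H = DA.T @ (q[:, None] * DA)          # K x K weighted normal matrix
        # KKT system of: min w' H w  s.t.  1' w = 1
        KKT = np.block([[2.0 * H, np.ones((K, 1))],
                        [np.ones((1, K)), np.zeros((1, 1))]])
        rhs = np.concatenate([np.zeros(K), [1.0]])
        w = np.linalg.solve(KKT, rhs)[:K]
    return w
```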

Royi