Questions tagged [sparsity]

64 questions
14
votes
2 answers

Are greedy methods such as orthogonal matching pursuit considered obsolete for finding sparse solutions?

When researchers first began seeking sparse solutions to $Ax = b$, they used greedy methods such as orthogonal matching pursuit (OMP). In OMP, we activate components of $x$ one by one, and at each stage we select the component $i$ such that the…
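The greedy selection described in this excerpt can be sketched in a few lines of NumPy. This is a hypothetical minimal OMP (not code from any answer): pick the column most correlated with the residual, re-fit least squares on the active set, repeat.

```python
import numpy as np

def omp(A, b, k):
    """Minimal orthogonal matching pursuit: activate k columns of A greedily."""
    m, n = A.shape
    residual = b.astype(float)
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # select the component i whose column is most correlated with the residual
        i = int(np.argmax(np.abs(A.T @ residual)))
        if i not in support:
            support.append(i)
        # re-fit by least squares on the current active set, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# synthetic example: recover a 2-sparse x from noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

With a well-conditioned random dictionary and a very sparse target, the greedy steps recover the true support exactly in the noiseless case.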
littleO
  • 54,048
8
votes
2 answers

How can the least-norm problem in the $1$-norm be reduced to a linear program?

Problem Statement: Show how the $L_1$-sparse reconstruction problem: $$\min_{x}{\left\lVert x\right\rVert}_1 \quad \text{subject to} \; y=Ax$$ can be reduced to a linear program of the form: $$\min_{u}{b^Tu} \quad \text{subject to} \; Gu=h,…
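The standard reduction splits $x$ into positive and negative parts, $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \mathbf{1}^T(u+v)$. A sketch using `scipy.optimize.linprog` (the matrix and sparse vector below are made-up test data):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP via the split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                # objective: 1^T u + 1^T v = ||x||_1
    A_eq = np.hstack([A, -A])         # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# synthetic basis-pursuit instance
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[5, 9, 30]] = [1.0, -0.7, 2.0]
x_hat = l1_min(A, A @ x_true)
```

The LP solution is feasible by construction and its $\ell_1$ norm can be no larger than that of any other feasible point, including the sparse one used to generate the data.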
7
votes
1 answer

Sparse PCA vs Orthogonal Matching Pursuit

Can't wrap my head around the difference between Sparse PCA and OMP. Both try to find a sparse linear combination. Of course, the optimization criteria are different. In Sparse PCA we have: \begin{aligned} \max & x^{T} \Sigma x \\ \text { subject to…
5
votes
1 answer

Sparse Approximation in the Mahalanobis Distance

Given a vector $z \in \mathbb{R}^n$ and $k < n$, finding the best $k$-sparse approximation to $z$ in terms of the Euclidean distance means solving $$\min_{\{x \in \mathbb{R}^n : ||x||_0 \le k\}} ||z - x||_2$$ This can easily be done by choosing $x$…
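The Euclidean case mentioned in the excerpt (the Mahalanobis case is the actual question) amounts to keeping the $k$ entries of $z$ with largest magnitude and zeroing the rest:

```python
import numpy as np

def ksparse_euclidean(z, k):
    """Best k-sparse approximation in the l2 distance: keep the k largest |z_i|."""
    x = np.zeros_like(z, dtype=float)
    idx = np.argsort(np.abs(z))[-k:]   # indices of the k largest magnitudes
    x[idx] = z[idx]
    return x

z = np.array([0.5, -3.0, 1.2, 0.1, 2.4])
x = ksparse_euclidean(z, 2)   # keeps -3.0 and 2.4
```

Under a general Mahalanobis metric the coordinates couple through the covariance, so this simple per-entry rule no longer gives the optimum, which is the point of the question.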
5
votes
0 answers

Controlling the number of nonzero components in the LASSO solution

Let $A$ be a real $m \times n$ matrix. The Lasso optimization problem is $$ \text{minimize} \quad \frac12 \| Ax - b \|_2^2 + \lambda \| x \|_1 $$ The optimization variable is $x \in \mathbb R^n$. The $\ell_1$-norm regularization term encourages…
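A minimal ISTA (proximal gradient) sketch for the Lasso objective above; the step size $1/\|A\|_2^2$ and iteration count are illustrative choices, and the data are synthetic:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """ISTA: gradient step on (1/2)||Ax - b||^2 followed by the l1 prox."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# one strong coefficient plus small noise; larger lambda -> fewer nonzeros
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = A @ np.array([2.0] + [0.0] * 9) + 0.01 * rng.standard_normal(30)
x = lasso_ista(A, b, lam=0.5)
```

As the question notes, $\lambda$ only controls sparsity indirectly: there is no closed-form map from $\lambda$ to the number of nonzeros, so in practice one sweeps $\lambda$ and inspects the support.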
littleO
  • 54,048
4
votes
1 answer

Finding a sparse solution to $A x = b$ via linear programming

I'm trying to solve a system $Ax = b$ where all entries of $x$ are nonnegative, and most are zero. So if $x$ has $N$ entries, then $\epsilon N$ of them are nonzero, where $\epsilon > 0$ is some small constant. Is it possible to use linear…
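Under the nonnegativity assumption in the question, $\|x\|_1 = \mathbf{1}^T x$, so the $\ell_1$ heuristic is directly a linear program with no variable splitting needed. A sketch with made-up data:

```python
import numpy as np
from scipy.optimize import linprog

# With x >= 0, ||x||_1 = sum(x), so l1 minimization is itself an LP:
#   min 1^T x   subject to   Ax = b,  x >= 0
rng = np.random.default_rng(3)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[[4, 11, 50]] = [1.0, 0.5, 2.0]   # nonnegative and sparse
b = A @ x_true

res = linprog(np.ones(60), A_eq=A, b_eq=b, bounds=(0, None))
x_hat = res.x
```

A basic LP solution is a vertex of the feasible polyhedron, so it has at most $m$ nonzeros automatically; whether it matches the underlying sparse $x$ depends on the usual compressed-sensing conditions on $A$.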
3
votes
2 answers

Finding the unit vector minimizing the sum of the absolute values of the projections of a set of points

Consider $$ \min_{\mathbf{w} \in \mathbb{R}^d} \|\mathbf{X}^T\mathbf{w}\|_1 \qquad\text{subject to } \quad \|\mathbf{w}\|_2^2=1, $$ where $\mathbf{X}\in\mathbb{R}^{d\times m}$ is a set of $d$-dimensional points and $m > d$. How can I solve this…
3
votes
5 answers

Compressive sensing with non-square matrices

I am implementing the algorithm in this paper. However, I have run into a problem with my solver for the linear program. I need to solve a linear program where I minimise the $1$-norm of a vector subject to the constraint that the vector, when…
3
votes
2 answers

Stability of the Solution of $ {L}_{1} $ Regularized Least Squares (LASSO) Against Inclusion of Redundant Elements

The problem of finding $$\min_{x}\left( \|Ax-b\|^2_2+\lambda \|x\|_1\right),$$ where $\|\cdot\|_2$ and $\|\cdot\|_1$ are the $L_2$ and $L_1$ norms, respectively, is usually called the LASSO. $A$ is a matrix, $x$ and $b$ are…
thedude
  • 1,907
3
votes
1 answer

If $ {L}_{0} $ Regularization Can be Done via the Proximal Operator, Why Are People Still Using LASSO?

I have just learned that a general framework in constrained optimization is called "proximal gradient optimization". It is interesting that the $\ell_0$ "norm" is also associated with a proximal operator. Hence, one can apply iterative hard…
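The hard-thresholding prox the question alludes to can be plugged into iterative hard thresholding (IHT). A minimal sketch of the fixed-sparsity variant on synthetic data (illustrating the mechanism, not arguing it beats LASSO):

```python
import numpy as np

def hard_threshold(v, k):
    """Prox of the constraint ||x||_0 <= k: keep the k largest-magnitude entries."""
    x = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    x[idx] = v[idx]
    return x

def iht(A, b, k, n_iter=500):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= k."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step; keeps descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x - step * (A.T @ (A @ x - b)), k)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[[7, 21]] = [1.0, -1.5]
x_hat = iht(A, A @ x_true, k=2)
```

The catch raised in the answers to questions like this one: the $\ell_0$ problem is nonconvex, so IHT only has recovery guarantees under restricted-isometry-type conditions, whereas the convex LASSO is globally solvable regardless.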
2
votes
0 answers

When is the inverse of a sparse SPD matrix also sparse?

I have seen in several places that the inverse of a sparse matrix is generally not sparse, but I have failed to find more in-depth analysis than empirical or case-by-case studies. My question is the following: is there a general way to characterize…
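A quick numerical illustration of the phenomenon in the question: the tridiagonal 1-D Laplacian is SPD and very sparse, yet its inverse is fully dense (every entry of the inverse is strictly positive):

```python
import numpy as np

# Tridiagonal SPD matrix (1-D discrete Laplacian): only 3n - 2 nonzeros
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A_inv = np.linalg.inv(A)

sparsity_A = np.count_nonzero(A)                        # 3n - 2 = 16 for n = 6
sparsity_inv = np.count_nonzero(np.abs(A_inv) > 1e-12)  # all n^2 = 36 entries
```

The structural reason is that the graph of $A$ (a path) is connected, and entries of the inverse decay with graph distance rather than vanishing; exact sparsity of the inverse essentially requires the graph of $A$ to be disconnected into blocks.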
2
votes
2 answers

Minimizing the number of non zero columns of a linear subspace of matrices

I'd like to solve the following minimization problem $$\min_{X_1,X_2} \mbox{nzc} (A+B_1X_1+B_2X_2)$$ where $\mbox{nzc}(D)$ denotes the number of non-zero columns in $D$, and where $X_i, A, B_i$ are matrices of appropriately chosen dimensions.…
Mathew
  • 1,928
2
votes
0 answers

Group lasso with weighted parameters and L0 norm penalty

I have explored the following hard problem for a long time. I need some help for the (possibly) final steps. Specifically, \begin{equation}\tag{1} \min_{\mathbf{x}\in\mathbf{R}^n}\left\{ f(\mathbf{x}):= \frac{1}{2}\|\mathbf{x}-\mathbf{v}\|_2^2 +…
suineg
  • 407
2
votes
0 answers

Newton Polytope of a symmetric polynomial with few vertices

For an $n$-variate polynomial $f = \sum_{a_1,\dotsc,a_n} c_{a_1,\dotsc,a_n}\, x_1^{a_1}x_2^{a_2} \cdots x_n^{a_n}$, its Newton polytope $P_f$ is defined as the convex hull of all exponent vectors in the support of $f$. There are known examples where the number of vertices…
2
votes
0 answers

Orthogonal projection onto a sparse subspace of dimension $s$

Traditional orthogonal projection of a given point $y \in \mathbb{R}^n$ onto a closed and convex set $D \subseteq \mathbb{R}^n$ is defined as the following: $$ P_D(y)=\arg\min_{x \in D}||x-y||_2^2 $$ Now suppose one wants to find the orthogonal projection…