For a software project I'm involved in, I have a large vector that is the sum of some smaller vectors. I know all the possible small vectors, and I know that no two of them start at the same position. I therefore think I can determine which small vectors are present in the large one by placing each small vector at each possible position as a column of a large matrix $A$. If $b$ is the signal (a particular sum of parts), I need to solve this: $$ \min \|x\|_1 \text{ subject to } Ax = b \text{ and } x_i\in\{0,1\} \text{ for all } i. $$ From some Stanford notes I've read, I should be able to get the solution that minimizes the sum of $x$ using $A^T (AA^T)^{-1}b$; as far as I can tell, though, that formula actually gives the minimum-$\ell_2$-norm solution of $Ax = b$, not the minimum-$\ell_1$-norm one. Indeed, that solution has, um, hills where I should have a 1, but it is not what I would call sparse, and it varies too much for any threshold to recover the right answer. Any ideas on how to solve this more effectively?
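Concretely, here is what I'm computing now (a NumPy sketch with a made-up full-row-rank $A$, not my real dictionary; as I understand it, the formula is just the pseudoinverse applied to $b$, i.e. the minimum-$\ell_2$-norm solution):

```python
import numpy as np

# Made-up underdetermined system (4 equations, 6 unknowns), just to
# illustrate the formula; this is not the real dictionary matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
b = rng.standard_normal(4)

# The formula from the notes: x = A^T (A A^T)^{-1} b.
# For full-row-rank A this is exactly pinv(A) @ b, the solution of
# Ax = b with the smallest l2 norm -- which is generally dense.
x = A.T @ np.linalg.solve(A @ A.T, b)
```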
Here's an example:
Suppose I have two small vectors: $[3, 4, 5]$ and $[2, -1, -2]$. Suppose the first starts at positions 0 and 3 and the second starts at position 2 in the final signal. The sum/signal/$b$ is then $[3, 4, 7, 2, 2, 5]$, and the $A$ in this scenario would be drawn up like this:
$$ \left[\begin{array}{rrrrrr} 3 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 4 & 3 & 0 & 0 &-1 & 2 & 0 & 0 \\ 5 & 4 & 3 & 0 &-2 &-1 & 2 & 0 \\ 0 & 5 & 4 & 3 & 0 &-2 &-1 & 2 \\ 0 & 0 & 5 & 4 & 0 & 0 &-2 &-1 \\ 0 & 0 & 0 & 5 & 0 & 0 & 0 &-2 \end{array}\right] $$
I want to perform some kind of solve operation that returns this $x$: $[1, 0, 0, 1, 0, 0, 1, 0]$, i.e. picking out columns 1, 4, and 7 of $A$.
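For what it's worth, here is this example set up in code (a NumPy/SciPy sketch). On this instance, relaxing $x_i \in \{0,1\}$ to $0 \le x_i \le 1$ and solving the resulting linear program with `scipy.optimize.linprog` recovers the binary $x$ exactly, while the minimum-norm formula comes out dense:

```python
import numpy as np
from scipy.optimize import linprog

def build_A(parts, n):
    """Each column of A is one small vector placed at one start position."""
    cols = []
    for p in parts:
        for start in range(n - len(p) + 1):
            col = np.zeros(n)
            col[start:start + len(p)] = p
            cols.append(col)
    return np.column_stack(cols)

A = build_A([[3, 4, 5], [2, -1, -2]], n=6)   # the 6x8 matrix above
b = np.array([3, 4, 7, 2, 2, 5])

# Minimum-l2-norm solution: satisfies Ax = b but comes out dense ("hills").
x_l2 = A.T @ np.linalg.solve(A @ A.T, b)

# LP relaxation of the l1 problem: min sum(x) s.t. Ax = b, 0 <= x <= 1.
res = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=b, bounds=(0, 1))
x_lp = res.x   # on this instance: [1, 0, 0, 1, 0, 0, 1, 0]
```

In general the relaxation need not come out integral, though; when it doesn't, I suppose this would need a proper integer programming solver (something like `scipy.optimize.milp` with integrality constraints) rather than plain `linprog`.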