10

A matrix has the consecutive ones property (often abbreviated C1P) if each of its rows (or columns, for the column-oriented C1P) is of the form $(0,\ldots,0,1,\ldots,1,0,\ldots,0)$.
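For instance, in the matrix $$\left[\begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right]$$ the ones of every row form a single contiguous block, so it has the (row-oriented) consecutive ones property.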

There is a theorem which says that any such matrix is totally unimodular, i.e., every square submatrix of it has determinant $-1$, $0$, or $1$.

Naturally, this also holds if we can permute rows/columns and/or transpose the matrix into one with the consecutive ones property. However, how do I prove the theorem?

Context: The question was asked here, but the answer was too long for a comment and didn't fit as an edit to the original post.

dtldarek
  • 37,969
  • @K.C.Wu Not really. It's a naming convention; I'm assuming here the row-oriented consecutive ones property, and the proof for the column-oriented version is similar (or you can work with the transposed matrix). – dtldarek May 04 '14 at 17:33

1 Answer

21

I don't know who the original author of this proof is, but I have seen it more than once.

Idea:

The general idea is that, for any sequence of the form $$(0,0,\ldots,0,1,1,\ldots,1,0,0,\ldots,0),$$ the sequence of differences (subtracting from each entry the entry to its right, as in the proof below) is $$(0,0,\ldots,0,-1,0,0,\ldots,0,1,0,0,\ldots,0),$$ that is, it contains at most two non-zero entries, one $1$ and one $-1$. Hence, we can manipulate any submatrix into a much simpler one and calculate its determinant.
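For instance, for the row $(0,1,1,1,0,0)$ this differencing (defined precisely below) gives $$(0,1,1,1,0,0) \;\longmapsto\; (-1,0,0,1,0,0),$$ a single $-1$ in the column just before the block of ones and a single $1$ in the last column of the block.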

Proof:

Let's assume that the matrix has the row-oriented consecutive ones property (for the column-oriented version, work with the transposed matrix). Let $A$ be any square submatrix; of course, it also has the consecutive ones property.

Define matrix $B$ by

$$ b_{r,c} = \begin{cases} a_{r,c} - a_{r,c+1} & \text{ for }c+1 \leq \mathrm{columns}(A), \\ a_{r,c} & \text{ otherwise}. \end{cases} $$
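For a concrete check, take for example $$A = \left[\begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{array}\right], \qquad B = \left[\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{array}\right];$$ every row of $B$ has at most two non-zero entries, and indeed $\det A = \det B = -1$.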

By C1P, the matrix $B$ has at most two non-zero entries in each row: one $1$ and one $-1$ (or fewer). There are three cases:

  • If some row has no non-zero entries, the determinant is zero.
  • If some row has exactly one non-zero entry, that entry is $\pm 1$, so Laplace expansion along that row gives the determinant as $\pm 1$ times the determinant of the only minor with a non-zero coefficient; we then repeat the case analysis on that smaller minor (a worked expansion follows this list).
  • After all such expansions, every row that is left (if any are left) has exactly two non-zero entries, one $1$ and one $-1$. Call this matrix $B'$ and observe that $$B' \left[\begin{array}{c}1\\1\\\vdots\\1\end{array}\right] = \left[\begin{array}{c}0\\0\\\vdots\\0\end{array}\right],$$ in other words, $B'$ is singular and its determinant is zero.
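Continuing the small example above: the first row of $B$ has the single non-zero entry $b_{1,2} = 1$, so $$\det B = (-1)^{1+2} \cdot 1 \cdot \det\left[\begin{array}{rr} -1 & 1 \\ -1 & 0 \end{array}\right];$$ in the remaining minor the second row again has a single non-zero entry $-1$, and expanding along it leaves the $1 \times 1$ matrix $[\,1\,]$, hence $\det B = (-1)\cdot\bigl((-1)^{2+1}\cdot(-1)\cdot 1\bigr) = -1$. No separate induction hypothesis is needed: each expansion contributes a factor of $\pm 1$ and leaves a strictly smaller minor, on which the same three cases are checked again.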

Hence, the determinant of $B$ is either zero (first and third cases) or a product of the $\pm 1$ coefficients collected during the expansions, so it lies in $\{-1,0,1\}$. Moreover, $B$ arises from $A$ by elementary column operations (subtracting from each column, except the last, the column to its right), which do not change the determinant, so the determinant of $A$ is also in $\{-1,0,1\}$. However, $A$ was an arbitrary square submatrix; therefore, we have proved the total unimodularity of the original matrix.
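To make the step from $\det B$ to $\det A$ explicit: the definition of $B$ amounts to the matrix product $$B = A\,T, \qquad T = \left[\begin{array}{rrrrr} 1 & & & & \\ -1 & 1 & & & \\ & -1 & \ddots & & \\ & & \ddots & 1 & \\ & & & -1 & 1 \end{array}\right],$$ where $T$ has $1$'s on the diagonal, $-1$'s directly below it, and zeros elsewhere. Since $T$ is lower triangular with unit diagonal, $\det T = 1$, and therefore $\det A = \det B$.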

I hope this helps $\ddot\smile$

dtldarek
  • 37,969
  • What do we do in the second case, where some row has exactly one non-zero entry: do we use induction? If yes, how do we apply the induction hypothesis, or decide what the induction hypothesis should be for the proof to go through? – FullOfDoubts Mar 24 '24 at 12:32
  • Why is A a TUM if B is? – Ran Elgiser Aug 11 '24 at 16:15