I don't know who the original author of this proof is, but I have seen it more than once.
Idea:
The general idea is that, for any sequence of the form $$(0,0,\ldots,0,1,1,\ldots,1,0,0,\ldots,0),$$ the sequence of differences is $$(0,0,\ldots,0,1,0,0,\ldots,0,-1,0,0,\ldots,0),$$ that is, it contains at most two non-zero entries: one $1$ and one $-1$. Hence, we can transform any square submatrix into a simpler one and compute its determinant.
Proof:
Let's assume that the matrix has the consecutive ones property on rows (for the column-oriented version, work with the transposed matrix). Let $A$ be any square submatrix; of course, it inherits the consecutive ones property.
Define matrix $B$ by
$$
b_{r,c} =
\begin{cases}
a_{r,c} - a_{r,c+1} & \text{ for }c+1 \leq \mathrm{columns}(A), \\
a_{r,c} & \text{ otherwise}.
\end{cases}
$$
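To see the construction concretely, here is a small Python sketch (the matrix `A` and the `det` helper are my own illustrative choices, not part of the original proof) that builds $B$ from a C1P matrix $A$ and confirms that the two determinants agree:

```python
from itertools import permutations

def det(M):
    """Exact integer determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the sign of the permutation
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# A square matrix whose ones are consecutive in every row.
A = [
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]

# B as defined above: subtract the next column, keep the last column as-is.
n = len(A)
B = [[A[r][c] - A[r][c + 1] if c + 1 < n else A[r][c] for c in range(n)]
     for r in range(n)]

print(det(A), det(B))  # equal: column operations preserve the determinant
```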
By C1P, each row of the matrix $B$ has at most two non-zero entries. There are three cases:
- If some row has no non-zero entries, the determinant is zero.
- If some row has exactly one non-zero entry, it is $\pm 1$; perform Laplace expansion along that row and continue with the only minor whose coefficient is non-zero.
- After all the expansions, all the remaining rows (if there are any) have exactly two non-zero entries, one $1$ and one $-1$. Call this matrix $B'$ and observe that $$B' \left[\begin{array}{c}1\\1\\\vdots\\1\end{array}\right] = \left[\begin{array}{c}0\\0\\\vdots\\0\end{array}\right],$$ in other words, every row of $B'$ sums to zero, so $B'$ is singular and its determinant is zero.
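The singularity argument in the last case can be checked directly. In this toy example (my own, chosen only to match the shape of $B'$), every row holds exactly one $1$ and one $-1$, so the all-ones vector lies in the kernel:

```python
# A toy B': every row contains exactly one 1 and one -1.
Bp = [
    [ 1, -1,  0],
    [ 0,  1, -1],
    [-1,  0,  1],
]

# Multiply B' by the all-ones vector: every row sums to zero.
ones = [1] * len(Bp)
image = [sum(row[j] * ones[j] for j in range(len(row))) for row in Bp]
print(image)  # [0, 0, 0], so B' is singular
```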
Hence, the determinant of $B$ is in $\{-1,0,1\}$. Moreover, $B$ arises from $A$ by elementary column operations (subtracting the next column from every column but the last), each of which preserves the determinant, so the determinant of $A$ is also in $\{-1,0,1\}$. Since $A$ was an arbitrary square submatrix, this proves the total unimodularity of the original matrix.
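As a sanity check of the full statement, here is a brute-force sketch (over a C1P matrix of my own choosing, not part of the proof) that enumerates every square submatrix and inspects its determinant:

```python
from itertools import combinations, permutations

def det(M):
    """Exact integer determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the sign of the permutation
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# A matrix with the consecutive ones property on rows (illustrative choice).
M = [
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0],
]

# Collect the determinants of all square submatrices.
rows, cols = len(M), len(M[0])
dets = set()
for k in range(1, min(rows, cols) + 1):
    for rs in combinations(range(rows), k):
        for cs in combinations(range(cols), k):
            dets.add(det([[M[r][c] for c in cs] for r in rs]))

print(dets)  # a subset of {-1, 0, 1}
```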
I hope this helps $\ddot\smile$