23

The following matrix is clearly invertible, but how can one show it cleverly?

$$\begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}$$

My idea - consider the null space of the matrix. This is essentially equivalent to the following system: $$ \left\{ \begin{matrix} a + b + c = 0 \\ a + b + d = 0 \\ a + c + d = 0 \\ b + c + d = 0 \end{matrix} \right. $$ Adding these equations, you get
$$3\cdot(a+b+c+d) = 0 \implies a+b+c+d = 0$$

Subtracting each original equation from this new one, we immediately get that $a=b=c=d=0$, which implies that the null space is $\{0\}$ and therefore the matrix is invertible. But I'm not satisfied with this solution; I feel like there should be some clever way to do this entirely with simple matrix and/or row operations. Any ideas?
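
Not the clever argument I'm after, but as a quick numerical sanity check of the null-space reasoning, here is a minimal sketch (assuming Python with numpy):

```python
import numpy as np

# The matrix in question.
M = np.array([[1., 1., 1., 0.],
              [1., 1., 0., 1.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])

# A trivial null space is equivalent to full rank / nonzero determinant.
print(np.linalg.matrix_rank(M))  # 4 (full rank)
print(np.linalg.det(M))          # about -3 (nonzero, so invertible)
```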

  • 3
    Do you know how to calculate a determinant? You can calculate it using row operations. – eti902 Nov 19 '24 at 22:57
  • 7
    See also this post. Your question (if you reverse the order of the rows, which maintains invertibility) is just an instance of this general problem, so any of the approaches there apply. – Ben Grossmann Nov 20 '24 at 00:05
  • 1
    Your determinant differs by a sign from the determinants considered here: https://math.stackexchange.com/questions/81016/determinant-of-a-specific-circulant-matrix-a-n/87294#87294 – tkf Nov 20 '24 at 22:30

12 Answers

28

If the matrix is viewed as a real or complex one, note that its square is $I+2ee^T$ (here $e$ denotes the vector of ones), which is positive definite. Hence the matrix is invertible.
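
A quick numerical check of the identity and of positive definiteness (a sketch, assuming numpy; not part of the proof):

```python
import numpy as np

M = np.array([[1., 1., 1., 0.],
              [1., 1., 0., 1.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])
e = np.ones((4, 1))

# M^2 equals I + 2 e e^T ...
print(np.allclose(M @ M, np.eye(4) + 2 * e @ e.T))   # True
# ... whose eigenvalues are all positive (positive definite).
print(np.linalg.eigvalsh(np.eye(4) + 2 * e @ e.T))   # [1. 1. 1. 9.]
```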

user1551
  • 149,263
15

Here is another way to handle this issue.

It is by no means shorter than your solution, but it is not unusual for this kind of matrix (see the remark below).

Let $\bf{1}$ be the column matrix with all entries equal to $1$. The given matrix, call it $M$, can be written:

$$M=\bf{1.1^T}-J \ \ \text{where} \ \ J:=\pmatrix{0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0}.$$

Now, let us look for an inverse of the form:

$$N=a\bf{1.1^T}+bJ \tag{1}$$

Let us expand:

$$M.N=(\bf{1.1^T}-J)(a\bf{1.1^T}+bJ) \tag{2}$$

using the following facts:

  • $\bf{1.1^T.1.1^T}=4 \bf{1.1^T}$,

  • $J \bf{1} = \bf{1}$

  • $\bf{1^T}J = \bf{1^T}$,

  • $J^2=I$ (identity matrix),

we get:

$$M.N=(4a+b-a)\bf{1.1^T}-b.I \tag{3}$$

In order that the RHS of (3) be equal to the identity matrix $I$, it is sufficient (and in fact necessary) that $-b=1$ and $3a+b=0$, giving $b=-1$ and $a=\tfrac{1}{3}$.

Using these values of $a$ and $b$ in (1) proves the existence of the inverse, in the form:

$$N=\tfrac{1}{3}\bf{1.1^T}-J \tag{4}$$

This proof extends at once to an $n \times n$ matrix $M$ with the same structure, with $\tfrac{1}{3}$ replaced by $\tfrac{1}{n-1}$.

Important remark: A matrix such as $M=-J+\bf{1.1^T}$ is called a "rank-one perturbation" of the matrix $-J$. I could have directly used the Sherman-Morrison formula, which gives an explicit expression for the inverse.
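
As a quick numerical check of (4) (a minimal sketch, assuming numpy):

```python
import numpy as np

one = np.ones((4, 1))
J = np.fliplr(np.eye(4))      # the anti-diagonal permutation matrix J
M = one @ one.T - J           # the matrix from the question
N = one @ one.T / 3 - J       # the claimed inverse (4)

print(np.allclose(M @ N, np.eye(4)))  # True
```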

Jean Marie
  • 88,997
  • I always love a constructive answer. The notation $4\mathbf{1}.\mathbf{1}^{\mathbf T}$ is abhorrent, though (even with volitional boldface)! Makes me long back to my quantum mechanics days, $4|\mathbf{1}\rangle\langle\mathbf{1}|$ is much clearer. $4(\mathbf{1}\otimes\mathbf{1})$ also works quite well. – leftaroundabout Nov 21 '24 at 02:06
  • 1
    @leftaroundabout About the notation $11^T$, I just stick to the now classical convention (having its origin in numerical linear algebra) which, for example, is used in the above-mentioned article on the Sherman-Morrison formula. – Jean Marie Nov 21 '24 at 07:51
11

Let $A$ be the matrix with only ones and $$B= \begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0 \end{pmatrix}$$ It isn't too hard to see that $B^2=I$ (so $B^{-1}=B$), $A^2=4A$, and $BA=AB=A$. Your matrix is $A-B$. Guess that the inverse has the form $xI+yA+zB$; then \begin{align*} (A-B)(xI+yA+zB)&=xA+yA^2+zBA-xB-yAB-zB^2\\ &=xA+4yA+zA-xB-yA-zI\\ &=(x+3y+z)A-xB-zI\\ &\overset{!}{=}I \end{align*} Hence, $z=-1$, $x=0$ and $y=\frac{1}{3}$. So the inverse is $\frac{1}{3}A-B$, or $$\frac{1}{3} \begin{pmatrix} 1&1&1&-2\\ 1&1&-2&1\\ 1&-2&1&1\\ -2&1&1&1 \end{pmatrix}$$


In general, if $A$ and $B$ are commuting matrices and you have a matrix $M\in R[A,B]$ that is polynomial in $A$ and $B$ (where $R$ is the ring of coefficients of $A$, $B$, $M$), then the inverse of $M$, if it exists, is also polynomial in $A$ and $B$.

Proof. By the Cayley-Hamilton theorem, $\chi_M(M)=M^n+a_{n-1}M^{n-1}+\dots+a_0I=0$, and hence if $M$ is invertible, its inverse is a polynomial in $M$: $$M^{-1}=-a_0^{-1}\left(M^{n-1}+a_{n-1}M^{n-2}+\dots+a_1I\right).$$ Therefore, it is also a polynomial in $A$ and $B$. $\square$ In our case, a polynomial in $A$ and $B$ is just an expression $xI+yA+zB$, as the squares of $A$ and $B$ can be reduced to linear combinations of $A$, $B$, $I$.
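
Here is a sketch of the Cayley-Hamilton construction in Python (assuming numpy, whose np.poly returns the characteristic polynomial coefficients of a square matrix):

```python
import numpy as np

A = np.ones((4, 4))           # all ones
B = np.fliplr(np.eye(4))      # the anti-diagonal permutation
M = A - B

# Characteristic polynomial coefficients, leading coefficient first:
# chi(t) = t^4 + c[1] t^3 + c[2] t^2 + c[3] t + c[4].
c = np.poly(M)

# Cayley-Hamilton: M^{-1} = -(M^3 + c[1] M^2 + c[2] M + c[3] I) / c[4].
Minv = -(M @ M @ M + c[1] * M @ M + c[2] * M + c[3] * np.eye(4)) / c[4]

print(np.allclose(Minv, A / 3 - B))      # True: matches (1/3)A - B
print(np.allclose(M @ Minv, np.eye(4)))  # True
```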

Joshua Tilley
  • 10,106
10

Your matrix has three obvious eigenvectors: $$\pmatrix{1\\1\\1\\1},\pmatrix{1\\0\\0\\-1},\pmatrix{0\\1\\-1\\0},$$ pairwise orthogonal. A fourth one, orthogonal to them, is $$\pmatrix{1\\-1\\-1\\1}$$ (it was easy to guess since the matrix is symmetric).

Since the corresponding eigenvalues are $3,1,1,-1$, the matrix is invertible in $M_4(\Bbb R)$ or more generally in $M_4(K)$ for any field $K$ with characteristic $\ne3$, and it is singular in $M_4(K)$ for any field $K$ with characteristic $3$.
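
These eigenvalues are easy to confirm numerically (a sketch, assuming numpy; eigvalsh applies because the matrix is symmetric):

```python
import numpy as np

M = np.array([[1., 1., 1., 0.],
              [1., 1., 0., 1.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])

# No eigenvalue is zero, so M is invertible over the reals.
print(np.linalg.eigvalsh(M))  # [-1.  1.  1.  3.]
```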

Anne Bauval
  • 49,005
  • A different track: you can use the fact that an $n \times n$ circulant matrix (as is the case here) possesses $n$ independent eigenvectors with entries based on the $n$th roots of unity; see here. The associated eigenvalues are also known. – Jean Marie Nov 20 '24 at 18:34
  • "Since the corresponding eigenvalues are 3,1,1,−1" Are you missing negative signs for the middle eigenvalues? – Acccumulation Nov 22 '24 at 20:24
  • @Acccumulation There are none. – Anne Bauval Nov 22 '24 at 22:14
8

The given solution is clever and short enough. But OK, we are searching for "one more quick way". Permuting the rows or columns of the given matrix is equivalent to multiplication by a permutation matrix, so it preserves singularity or non-singularity; doing so, we obtain the matrix $$ B = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix} $$ $B$ has the obvious eigenvalue $3$ (add the rows in $B-3I$, as the OP did, to get a zero row). It also has the eigenvalue $-1$ with multiplicity $3$: we see immediately three independent vectors in the kernel of $B+I$, namely $(1,0,0,-1)$, $(0,1,0,-1)$, and $(0,0,1,-1)$; alternatively, note that $B+I$ has rank one. So the eigenvalues of $B$, taken with multiplicity, are $3,-1,-1,-1$. There is no zero in the list, so the matrix is non-singular.
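
A quick numerical confirmation of this spectrum (a sketch, assuming numpy):

```python
import numpy as np

B = np.ones((4, 4)) - np.eye(4)   # zeros on the diagonal, ones elsewhere

# B + I is the all-ones matrix: rank one, so -1 is an eigenvalue of
# multiplicity 3; the remaining eigenvalue is 3 (the common row sum).
print(np.linalg.matrix_rank(B + np.eye(4)))  # 1
print(np.linalg.eigvalsh(B))                 # [-1. -1. -1.  3.]
```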

dan_fulea
  • 37,952
6

If I interpret it correctly, you're looking for the matrix formalism of your method, aren't you?

Note that each equation is simply a row in the augmented matrix $$\left[ \begin{array}{cccc|c} 1 & 1 & 1 & 0 &0 \\ 1 & 1 & 0 & 1 &0 \\ 1 & 0 & 1 & 1&0 \\ 0 & 1 & 1 & 1& 0 \end{array}\right] $$

and all your operations are simple matrix operations (adding one row to another, multiplying one row by a constant $\neq 0$, swapping two rows).

So, adding rows 1,2,3 to the 4th row yields $$\left[ \begin{array}{cccc|c} 1 & 1 & 1 & 0 &0 \\ 1 & 1 & 0 & 1 &0 \\ 1 & 0 & 1 & 1&0 \\ 3 & 3 & 3 & 3& 0 \end{array}\right] $$ Dividing the last row by three then gives

$$\left[ \begin{array}{cccc|c} 1 & 1 & 1 & 0 &0 \\ 1 & 1 & 0 & 1 &0 \\ 1 & 0 & 1 & 1&0 \\ 1 & 1 & 1 & 1& 0 \end{array}\right] $$

Subtracting the fourth row from all other rows then yields $$\left[ \begin{array}{cccc|c} 0 & 0 & 0 & -1 &0 \\ 0 & 0 & -1 & 0 &0 \\ 0 & -1 & 0 & 0 &0 \\ 1 & 1 & 1 & 1& 0 \end{array}\right] $$

Adding the first three rows to the fourth row gives $$\left[ \begin{array}{cccc|c} 0 & 0 & 0 & -1 &0 \\ 0 & 0 & -1 & 0 &0 \\ 0 & -1 & 0 & 0 &0 \\ 1 & 0 & 0 & 0& 0 \end{array}\right] $$

from which one can already tell that it's invertible.

If not, the next step would be to swap the rows so you end up with a diagonal matrix.
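
The same sequence of row operations can be replayed in Python (a sketch, assuming numpy):

```python
import numpy as np

# Augmented matrix [M | 0].
R = np.array([[1., 1., 1., 0., 0.],
              [1., 1., 0., 1., 0.],
              [1., 0., 1., 1., 0.],
              [0., 1., 1., 1., 0.]])

R[3] += R[0] + R[1] + R[2]   # add rows 1,2,3 to row 4
R[3] /= 3                    # divide row 4 by three
R[:3] -= R[3]                # subtract row 4 from the other rows
R[3] += R[0] + R[1] + R[2]   # add the other rows to row 4
print(R)                     # anti-diagonal up to signs: clearly full rank
```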

ConnFus
  • 1,387
4

$ \def\a{\alpha} \def\b{\beta} \def\o{{\tt1}} \def\ao{\a_{-\o}} \def\q{\quad} \def\qq{\qquad} $The matrix is the all-ones matrix $J$ minus the counter-diagonal matrix $K$ $$ M=(J-K)\,\in\,{\mathbb R}^{n\times n} $$ The $K$ matrix reverses the order of the rows/cols of a matrix when multiplied on the left/right. These reversals have no effect on $J$, so we have the following nice algebraic properties $$ J = KJ = JK,\qq K^2=I,\qq J^2 = nJ $$ This allows you to easily calculate the first few powers of $M$ and notice that $$\eqalign{ M^p &= \begin{cases} \a_p\,J +\ I\, \q {\rm if}\;p={\sf even} \\ \a_p\,J -K \q {\rm if}\;p={\sf odd} \\ \end{cases} }$$ Since $p=-1$ is an odd power this suggests the ansatz $$ M^{-1} = \ao\,J-K $$ which can in fact be solved for $\ao$: $$ I = M^{-1}M = (\ao\,J -K)(J-K) \q\implies\q \ao = \frac{\o}{n-\o} $$ Thus not only does the inverse exist, but you can calculate it for any value of $n$.
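
A numerical check of this formula for several sizes (a sketch, assuming numpy):

```python
import numpy as np

# Verify M^{-1} = J/(n-1) - K for several n.
for n in range(2, 8):
    J = np.ones((n, n))
    K = np.fliplr(np.eye(n))
    M = J - K
    Minv = J / (n - 1) - K
    assert np.allclose(M @ Minv, np.eye(n))
print("formula verified for n = 2..7")
```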

greg
  • 40,033
3

Any way of showing that a matrix is non-singular shows that it is invertible, since the two conditions are equivalent. You can:

  • Check that all of the column vectors are linearly independent
  • Verify that the matrix has a nonzero determinant
  • Put the matrix into row-echelon form and confirm that it is full rank

among other methods. The third method can be done with only row operations; a sketch is below. A really excellent video presenting the intuition behind this is this one in 3Blue1Brown's "Essence of Linear Algebra" series. It explains how you can interpret a singular linear transformation as one that compresses the vectors of a vector space onto a lower-dimensional subspace, so you lose some "information" about the original vectors and can no longer invert the transformation to recover them.
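
A sketch of the third check, using sympy's rref (one possible tool; any row-reduction routine works):

```python
from sympy import Matrix

M = Matrix([[1, 1, 1, 0],
            [1, 1, 0, 1],
            [1, 0, 1, 1],
            [0, 1, 1, 1]])

# Reduced row-echelon form via row operations: the identity matrix,
# so M has full rank and is invertible.
print(M.rref()[0])  # Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
```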

3

Write your matrix as $A + \mathbf{1}\;\mathbf{1}^T$, where $A$ has entries $-1$ on the anti-diagonal and zeros elsewhere, and $\mathbf{1}$ is a vector of all ones. Note that $A^{-1} = A$. The matrix determinant lemma implies that $$\det(A + \mathbf{1}\; \mathbf{1}^T) = (1 + \mathbf{1}^T A \mathbf{1}) \det(A) = (1-n)\det(A) \neq 0,$$ where $n$ is the dimension of the matrix and $\det(A) = \pm 1$ since $A$ is a signed permutation matrix. Hence the original matrix is invertible.
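
A numerical spot-check of the lemma for several $n$ (a sketch, assuming numpy):

```python
import numpy as np

for n in range(2, 8):
    A = -np.fliplr(np.eye(n))          # -1 on the anti-diagonal
    one = np.ones((n, 1))
    lhs = np.linalg.det(A + one @ one.T)
    rhs = (1 - n) * np.linalg.det(A)   # lemma, using A^{-1} = A
    assert np.isclose(lhs, rhs) and not np.isclose(lhs, 0.0)
print("det(A + 11^T) = (1 - n) det(A) != 0 for n = 2..7")
```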

Yly
  • 15,791
1

There is a simple puzzle in which you are given the incomplete sums of $ N > 1 $ values and asked to derive the original values. The solution: taking the sum of the sums and dividing by $ N - 1 $ reveals the complete sum of the $ N $ values, after which each individual value is readily computed by subtracting the corresponding incomplete sum from the complete sum.

In equation form with $ N = 4 $ this is equivalent to the following equations:

$$ a + b + c = w \\ a + b + d = x \\ a + c + d = y \\ b + c + d = z $$

or as a matrix:

$$ \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} w \\ x \\ y \\ z \end{pmatrix} $$

The puzzle shows that it's always possible to recover $ a, b, c, d $ from $ w, x, y, z $, which means that there must be an inverse matrix. From the puzzle solution, this inverse is the matrix whose entries are all $ \frac 1 {N - 1} $ except for $ \frac {2 - N} {N - 1} $ on the antidiagonal.
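
The puzzle solution itself is easy to replay numerically (a sketch, assuming numpy; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
vals = rng.integers(0, 10, size=4)    # the hidden a, b, c, d
sums = vals.sum() - vals              # the incomplete sums w, x, y, z

total = sums.sum() / (len(vals) - 1)  # sum of sums / (N-1) = a+b+c+d
recovered = total - sums              # subtract each incomplete sum
print(np.allclose(recovered, vals))   # True
```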

Neil
  • 203
1

We may be tempted to calculate the determinant of a block matrix with $$ \det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD-CB),$$ and we can as long as, for example, $A$ is invertible and $AC = CA$.

Our matrix does not satisfy those conditions, but we can rearrange the rows to fix this:

$$ \det \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix} = \det \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{pmatrix} = \det \left(\begin{array}{cc:cc} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ \hdashline 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right)$$

which does satisfy the conditions since $A$ and $D$ are the identity matrix. Then the determinant is

$$ \det \left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix} \right) = \det \begin{pmatrix} -1 & -2 \\ -2 & -1 \end{pmatrix} = 1 - 4 = -3 \neq 0.$$
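
A quick numerical confirmation of the block computation (a sketch, assuming numpy):

```python
import numpy as np

M = np.array([[1., 0., 1., 1.],
              [0., 1., 1., 1.],
              [1., 1., 1., 0.],
              [1., 1., 0., 1.]])
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

print(np.linalg.det(M))              # about -3
print(np.linalg.det(A @ D - C @ B))  # about -3
```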

1

Summing all the columns shows that the column space contains the all-ones vector; subtracting each column from it yields the standard basis of $\mathbb{R}^4$. Hence the column space has dimension $4$ and the matrix is invertible.
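
A minimal numerical illustration of this argument (assuming numpy):

```python
import numpy as np

M = np.array([[1., 1., 1., 0.],
              [1., 1., 0., 1.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])

ones = M.sum(axis=1) / 3    # the column sum is 3*(1,1,1,1)
basis = ones[:, None] - M   # column j is the all-ones vector minus column j
print(basis)                # columns are e4, e3, e2, e1: the standard basis
```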

Ivan
  • 937