
I want to find the inverse of a large upper triangular matrix where all its entries are 1.

Is there some trick to it or do I have to compute it using the usual way?

7 Answers


What about $$\left[\begin{array}{ccccccc} 1 & -1 & 0 & 0 & 0&\dots\\ 0 & 1 & -1 & 0&0&\dots\\ 0 & 0 & 1 & -1 & 0&\dots\\ 0 & 0 & 0 & 1 & -1 &\dots\\ 0 & 0 & 0 & 0 & 1 & \dots\\ \vdots & \vdots & \vdots & \vdots & \vdots &\ddots \end{array}\right]$$
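This is easy to verify numerically; a minimal NumPy sketch (the size $n$ is arbitrary):

```python
import numpy as np

n = 6  # arbitrary size for the check
A = np.triu(np.ones((n, n)))     # upper triangular matrix of all ones
B = np.eye(n) - np.eye(n, k=1)   # 1 on the diagonal, -1 on the superdiagonal

# column j of B is e_j - e_{j-1}, so A @ B has columns a_j - a_{j-1} = e_j
assert np.array_equal(A @ B, np.eye(n))
```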

Gregory Grant

Such a matrix can be written as $I+N$ where $N$ is nilpotent.

Therefore, its inverse is $I-N+N^2-N^3+\cdots$, which is a finite sum because $N$ is nilpotent.
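A short NumPy sketch of this idea (the size $n$ is arbitrary), confirming that the alternating finite sum reproduces the inverse:

```python
import numpy as np

n = 5  # arbitrary size
N = np.triu(np.ones((n, n)), k=1)  # strictly upper triangular, hence nilpotent: N^n = 0
assert np.array_equal(np.linalg.matrix_power(N, n), np.zeros((n, n)))

A = np.eye(n) + N                  # the all-ones upper triangular matrix, written as I + N
# finite geometric series: (I + N)^{-1} = I - N + N^2 - N^3 + ...
A_inv = sum((-1) ** k * np.linalg.matrix_power(N, k) for k in range(n))
assert np.allclose(A @ A_inv, np.eye(n))
```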

lhf

Let $\eta$ be the $n\times n$ matrix with entries on the superdiagonal equal to $1$ and $0$ elsewhere.
Since $\eta^n = 0$, the upper triangular matrix you have can be written as $$\sum_{k=0}^{n-1} \eta^k = \sum_{k=0}^\infty \eta^k = (I_n -\eta)^{-1}.$$ The inverse matrix you seek is therefore $I_n - \eta$, i.e. the one given in Gregory Grant's answer.

achille hui

Notice that, for $k\gt1$, the $k$th column minus the $(k-1)$th column is $\mathbf e_k$, which is also the $k$th column of the identity matrix. So, for $k\gt1$ the $k$th column of the inverse must have $1$ in the $k$th row, $-1$ in the $(k-1)$th row, and zeros everywhere else. The first column of the triangular matrix already equals the first column of the identity, so the first column of its inverse is also $(1,0,\dots,0)^T$. Putting this all together, you get $$\pmatrix{1&-1&0&0&0&\dots\\0&1&-1&0&0&\dots\\0&0&1&-1&0&\dots\\0&0&0&1&-1&\dots\\\vdots&\vdots&\vdots&\vdots&\vdots&\ddots}$$ just as in Gregory Grant’s answer.

amd

lhf, I don't know what you mean by "the usual way," so I will add my two cents to the previous answers, which cover a more or less algebraic point of view. I suppose "the usual way" means Gaussian elimination; that is not what is done in practice.

In numerical analysis, to invert a square matrix $A$ one looks for a factorization of $A$ of a special form. (I am assuming the setting of numerical analysis, working with finite-precision representations of $\mathbb R$; there is computational research on algebraic problems over algebraic fields, but that is a different story.) The factorization is known as the QR factorization, and it factors $A$ as $$ A = Q * U ,$$ with $Q$ orthogonal and $U$ upper triangular. This is done because then (if $A^{-1}$ exists): $$ A^{-1} = U^{-1} * Q^* .$$ Here the fact that $Q$ is orthogonal implies $ Q^{-1} = Q^* $, the transpose of $Q$, so inverting $Q$ is an easy problem. Problems that require the inverse of $U$ are solved by backward substitution, which is how one would do it when working with pencil and paper.
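A minimal NumPy illustration of this approach (the example matrix is arbitrary, and `np.linalg.solve` stands in for a dedicated triangular back-substitution solver):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic (almost surely invertible) example matrix

Q, U = np.linalg.qr(A)            # A = Q @ U with Q orthogonal, U upper triangular
assert np.allclose(Q @ U, A)

# A^{-1} = U^{-1} Q^T.  In practice one solves the triangular system
# U X = Q^T by back substitution; a general solver is used here for brevity.
A_inv = np.linalg.solve(U, Q.T)
assert np.allclose(A @ A_inv, np.eye(4))
```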

To find $U^{-1}$, solve the system $U x = (a,b,c,d)^T$ for a generic right-hand side. For example, let's find the inverse of

$$ U = \left[ \begin{array}{cccc} 1 & 2 & -1 & 0 \\ 0 & 1 & 12 & -3 \\ 0 & 0 & 1 & -6 \\ 0 & 0 & 0 & 1 \\ \end{array} \right] $$

by solving the following system for $x,y,z,w$: \begin{align} x + 2y - z \phantom{{}-3w} &= a\\ y + 12z - 3w &= b \\ z - 6w &= c \\ w &= d \end{align} From the last equation, $w=d$. Substitute into the previous three equations and rearrange: \begin{align} x + 2y - z &= a\\ y + 12z &= b + 3d\\ z &= c + 6d \end{align} From the last equation, $z=c+6d$. Substitute into the previous two equations and rearrange: \begin{align} x + 2y &= a + c + 6d\\ y &= b - 12c - 69d \end{align} From the last equation, $y=b - 12c - 69d$. Substituting into the previous equation and simplifying: $$ x = a - 2b + 25c + 144d .$$ Putting it all together: \begin{align} x &= a - 2b + 25c + 144d\\ y &= b - 12c - 69d\\ z &= c + 6d \\ w &= d \end{align}

We can read $U^{-1}$ from the last expression: $$ U^{-1} = \left[ \begin{array}{cccc} 1 & -2 & 25 & 144 \\ 0 & 1 & -12 & -69 \\ 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 1 \\ \end{array} \right] $$
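A quick numerical sanity check of this worked example:

```python
import numpy as np

U = np.array([[1, 2, -1,  0],
              [0, 1, 12, -3],
              [0, 0,  1, -6],
              [0, 0,  0,  1]])
# entry (3,4) of the inverse is +6, since z = c + 6d
U_inv = np.array([[1, -2,  25, 144],
                  [0,  1, -12, -69],
                  [0,  0,   1,   6],
                  [0,  0,   0,   1]])
assert np.array_equal(U @ U_inv, np.eye(4))
```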


If you look at matrix multiplication as explained in this post, you can pretty much see what the inverse is in this case. I will give a detailed explanation here, but I would normally go through this in my head.
Let $A$ be the matrix you described, in the case of $n=3$; then we have $\newcommand{\vek}[1]{\boldsymbol{#1}}$ \begin{align} A = \begin{pmatrix} & & \\ \vek{a}_1 & \vek{a}_2 & \vek{a}_3 \\ & & \\ \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{pmatrix} \end{align} Also let $\vek{e}_j$ denote the canonical basis vectors of the three-dimensional vector space. If you read the post I linked, you should realise that calculating the inverse comes down to finding $\vek{\nu}_j$'s such that $$ A \begin{pmatrix} & & \\ \vek{\nu}_1 & \vek{\nu}_2 & \vek{\nu}_3 \\ & & \\ \end{pmatrix} = \begin{pmatrix} & & \\ A\vek{\nu}_1 & A\vek{\nu}_2 & A\vek{\nu}_3 \\ & & \\ \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} $$ because then the inverse of $A$ is just the matrix that has the $\vek{\nu}_j$ as column vectors. Since every $\vek{\nu}_j$ gives us a linear combination of the $\vek{a}_k$'s, we will just try to find linear combinations of the $\vek{a}_k$'s that are equal to $\vek{e}_j$ for $j = 1,2,3$: \begin{align} \vek{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} &= \vek{a}_1 = A \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \\ \vek{e}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} - \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} &= \vek{a}_2 - \vek{a}_1 = A \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \\ \vek{e}_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} &= \vek{a}_3 - \vek{a}_2 = A \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \end{align} So the inverse turns out to be $$ \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \\ \end{pmatrix} $$ Just as a check, using the calculations we did above, we get $$ A \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \\ \end{pmatrix} = \begin{pmatrix} A \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} & A \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} & A \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} $$ For $n>3$ you should now be able to see how to get the $\vek{e}_j$'s using the $\vek{a}_k$'s.

Léreau

For $n=4$, the system $A(x,y,z,t)^T = (a,b,c,d)^T$,

$$\begin{cases}\begin{align}x+y+z+t&=a\\ y+z+t&=b\\ z+t&=c\\ t&=d, \end{align}\end{cases}$$

is immediately solved by subtracting consecutive equations:

$$\begin{cases}\begin{align}x&=a-b\\ y&=b-c\\ z&=c-d\\ t&=d. \end{align}\end{cases}$$

You can easily construct the inverse (or keep it as an implicit formula).
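In code, this subtraction pattern is a one-liner; a small NumPy sketch (the right-hand side is an arbitrary example):

```python
import numpy as np

n = 4
A = np.triu(np.ones((n, n)))             # x+y+z+t=a, y+z+t=b, z+t=c, t=d
rhs = np.array([10.0, 7.0, 4.0, 1.0])    # arbitrary right-hand side (a, b, c, d)

# each unknown is the difference of consecutive right-hand-side entries:
# x = a-b, y = b-c, z = c-d, t = d
sol = rhs - np.append(rhs[1:], 0.0)
assert np.allclose(A @ sol, rhs)
```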