3

Let $A$ be an infinite matrix whose first-column entries are all $1$ and whose remaining entries are all $0$.

$$A=\begin{pmatrix} 1 & 0 & 0 & 0 & \cdots\\ 1 & 0 & 0 & 0 &\cdots\\ 1 & 0 & 0 & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Can A be diagonalized?

2 Answers

3

Since you are in infinite dimensions, you would first need to specify on which space the operator $A$ is supposed to act; only then can you try to check whether it satisfies the hypotheses of the spectral theorem.

If we first look at the action of $A$ on an arbitrary sequence of real numbers $a=(a_1, a_2, \ldots)$ (I'm assuming that you are working over $\mathbb{R}$), we see that $A(a) = (a_1, a_1, \ldots)$, which (unless $a_1 = 0$) will not lie in, e.g., $l^2$, the natural Hilbert space of sequences.
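As a quick sanity check (my own illustration, not part of the original argument), here is the action of an $n \times n$ truncation of $A$ in NumPy: every entry of $A(a)$ equals $a_1$, so the $\ell^2$ norm of the image grows like $\sqrt{n}\,|a_1|$ and escapes $\ell^2$ as $n \to \infty$.

```python
import numpy as np

n = 6                     # truncation size (illustrative choice)
A = np.zeros((n, n))
A[:, 0] = 1.0             # first column all ones, rest zero

a = np.array([2.0, -1.0, 3.0, 0.5, 0.0, 4.0])  # arbitrary sample sequence
Aa = A @ a                # every entry equals a[0]

print(Aa)                 # [2. 2. 2. 2. 2. 2.]
print(np.linalg.norm(Aa)) # sqrt(n) * |a[0]|, unbounded as n grows
```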

At first sight $A$ might seem to make sense only on $l^\infty$ (but even that fails, as @MartinArgerami points out, because we have no countable basis with which to interpret the action of $A$ on an arbitrary vector $u \in l^\infty$), and $l^\infty$ is definitely not a Hilbert space. Since we then lack a scalar product, we cannot define orthogonal eigenspaces, hence there is no orthogonal diagonalisation.

Note however that we can formally find another "infinite matrix" $P$ such that $P^{-1}$ "exists" in some sense and $D = P A P^{-1}$ is a diagonal infinite matrix, namely

$$D = \left(\begin{array}{ccccc} 1 & & & & \\ 0 & 0 & & & \\ 0 & 0 & 0 & & \\ 0 & 0 & 0 & 0 & \\ \vdots & & & & \ddots \end{array}\right)$$

with

$$P^{- 1} = \left(\begin{array}{ccccc} 1 & & & & \\ 1 & 1 & & & \\ 1 & 0 & 1 & & \\ 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots \end{array}\right),\ \ P = \left(\begin{array}{ccccc} 1 & & & & \\ - 1 & 1 & & & \\ - 1 & 0 & 1 & & \\ - 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots \end{array}\right).$$
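For what it is worth, the formal identity $D = P A P^{-1}$ holds exactly for every finite truncation of these matrices, which can be checked numerically (a sketch of mine, in NumPy; it does not rescue the infinite-dimensional argument):

```python
import numpy as np

n = 6                                    # truncation size
A = np.zeros((n, n))
A[:, 0] = 1.0                            # first column of ones, rest zero

Pinv = np.eye(n)
Pinv[:, 0] = 1.0                         # columns u, e_2, e_3, ...
P = np.eye(n)
P[1:, 0] = -1.0                          # inverse of Pinv

D = P @ A @ Pinv
print(np.allclose(P @ Pinv, np.eye(n)))  # True: P really inverts Pinv
print(D)                                 # 1 in the (0, 0) entry, zeros elsewhere
```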

Edit: If you are wondering where those matrices came from, it was basically this: it is natural to see how $A$ acts on the canonical basis, and one immediately sees that $A(e_1)=u=(1,1,1,1,\ldots)$ is an eigenvector with eigenvalue $1$ and that $A(e_i)=0$ for all $i>1$, so the $e_i$ are eigenvectors with eigenvalue $0$. You want $P$, $P^{-1}$ such that $D=P A P^{-1}$, where $P^{-1}$ is the change from the "new" basis of eigenvectors to the "old" one, i.e. the matrix with columns $u, e_2, e_3, \ldots$ Compute its "inverse" $P$, check that $D$ is all zeros except in the first entry, and you are done. But again, this is all formal and quite wrong, since we don't have a basis to begin with. See Martin's answer for more.

Miguel
  • 1,680
  • Why is the orthogonality necessary for diagonalization? It seems to me that you already did it. – Ian Apr 15 '17 at 19:49
  • Well, you are right that it is not necessary if you don't define it to be! What I meant is that we have no orthogonal diagonalisation (with an orthogonal change of basis), as in the spectral theorem, which is what I always think of when I hear diagonalisation. I'll edit the answer to fix it, thanks. – Miguel Apr 15 '17 at 20:09
  • @Miguel: note that your $P$ does not define an operator on $\ell^\infty$ (you cannot apply it to $(1,0,0,\ldots)^T$, for instance). – Martin Argerami Apr 15 '17 at 20:33
  • @MartinArgerami: I'd say that the matrix form precisely means $P(e_1) = (1,-1,-1,\ldots) \in l^\infty$ and $P(e_i) = e_i \in l^\infty$ for all $i>1$. But I think I understand what you mean: there is a problem with the whole idea of extending $P$ beyond the $e_i$ by linearity. Indeed, since the $e_i$ are not a Schauder basis for $l^\infty$, i.e. $\sum u_i e_i$ won't converge in norm to $u$, the whole thing is flawed. So in this sense, taking $l^\infty$ doesn't make any sense at all, you are right. I'll edit the question again, thanks! – Miguel Apr 15 '17 at 21:20
1

The question is a bit vague: how do you multiply arbitrary infinite matrices? You need some restrictions, and those restrictions affect when such a "matrix" is invertible (a notion you need in order to talk about diagonalization). Or, if you express diagonalization as the existence of a basis of eigenvectors, you need to say on which space $A$ acts.

Think of it this way: it is clear that one expects the spectrum of $A$ to be $\{0,1\}$. But note that $A$ has no eigenvector for the eigenvalue $1$: to have $Ax=x$ for nonzero $x$, you would need $$ x=\begin{bmatrix}1\\1\\1\\ \vdots\end{bmatrix} $$ and then $Ax$ is not defined. And that's the problem: your "matrix" $A$ does not define an operator if you want to generalize the usual action of matrices on vectors.
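To illustrate the expected spectrum $\{0,1\}$ (a numerical aside of mine, not part of the original answer): every $n \times n$ truncation of $A$ is lower triangular with diagonal $(1, 0, \ldots, 0)$, so its eigenvalues are exactly one $1$ and $n-1$ zeros, independently of $n$.

```python
import numpy as np

n = 8
A = np.zeros((n, n))
A[:, 0] = 1.0   # first column of ones: lower triangular, diagonal (1, 0, ..., 0)

eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)     # n-1 zeros followed by a single 1
```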

Martin Argerami
  • 217,281