Proposition. Let $T\colon V \to V$ be a linear operator. If $v_1, v_2, \ldots, v_m$ are eigenvectors of $T$ that belong to distinct eigenvalues, then they are linearly independent.

The usual proofs are (1) by induction and (2) via a Vandermonde matrix. I would like to suggest a proof that looks much more intuitive to me; see below. I did not find it in the literature and would be grateful for any reference and for any comments (do you agree that this proof is more intuitive and natural than the "classical" proofs?).

Proof. Assume that $v_1, v_2, \ldots, v_m$ are linearly dependent. Let $W:=\mathsf{span}(v_1, v_2, \ldots, v_m)$ and denote $k:=\mathsf{dim}(W)$; then $k<m$. Since each $v_i$ is an eigenvector of $T$, $T(v_i)$ is a scalar multiple of $v_i$ and hence lies in $W$, so $W$ is $T$-invariant. Let $S\colon W \to W$ be the restriction of $T$ to $W$. Then $v_1, v_2, \ldots, v_m$ are eigenvectors of $S$ belonging to the same distinct eigenvalues. This means that $S$, a linear operator on a $k$-dimensional vector space, has $m$ distinct eigenvalues with $m>k$, which is impossible.
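
For concreteness, here is the same argument spelled out in the smallest case $m=2$ (just an illustration of the proof above, nothing new). Suppose $v_1, v_2$ are eigenvectors for distinct eigenvalues $\lambda_1 \ne \lambda_2$ and are linearly dependent. Then $W=\mathsf{span}(v_1, v_2)$ has $k=\mathsf{dim}(W)=1$, and every linear operator on a one-dimensional space is multiplication by a single scalar, i.e.
$$S(w)=\mu w \quad \text{for some scalar } \mu \text{ and all } w\in W.$$
But $S(v_1)=\lambda_1 v_1$ and $S(v_2)=\lambda_2 v_2$ then force $\lambda_1=\mu=\lambda_2$, a contradiction.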

Caligari
  • Hmm, this makes sense to me. That's surprisingly more efficient than any other proof I've seen; I hope it's correct. I suppose you should also mention the trivial fact that $S$'s codomain can be taken to be $W$ too – FShrike May 27 '23 at 17:02
  • It seems like this may be circular, depending on how you prove that a $T:W\to W$ cannot have more eigenvalues than the dimension of $W$. The most obvious way to prove that uses the linear independence of the eigenvectors. The results are clearly related... – Thomas Andrews May 27 '23 at 17:15
  • @ThomasAndrews: $T-\lambda \mathbf{1}$ has a non-zero kernel iff it is singular, iff $p(\lambda) = \operatorname{det}(T-\lambda \mathbf{1}) = 0$, where $p(\lambda)$ is a polynomial of degree $\operatorname{dim} W$. So far all of this can be proved in any basis, with no need for eigenvectors. Finally one needs a result from field theory: a polynomial of degree $k$ can have at most $k$ roots. (A worked $2\times 2$ instance is sketched after this thread.) – Chad K May 27 '23 at 18:09
  • Ah, that definitely works. But the determinant and results about it are a pretty big hammer for an elementary result. From a pedagogic point of view, I wouldn't be surprised if determinants are usually covered only after this result. But it definitely seems less circular. – Thomas Andrews May 27 '23 at 18:35
  • I have shown here that if $p_1,\ldots,p_n$ are mutually coprime polynomials, and $v_1,\ldots,v_n$ are vectors $\neq 0$ satisfying $p_i(T)\cdot v_i = 0 \;\; \forall i\in\{1,\ldots,n\}$, then the vectors $v_1,\ldots,v_n$ are linearly independent. No induction and no Vandermonde matrix. We only have to know that the polynomials $(x-\lambda_1), (x-\lambda_2), \ldots$ are coprime if the values $\lambda_1, \lambda_2,\ldots$ are distinct. But this is obvious imho. – Reinhard Meier May 27 '23 at 19:25
  • @FShrike Thank you for the comment. To me, calling a linear map an "operator" already means that domain=codomain, but I know that this convention is not universal, and it is better to write $S\colon W\to W$, so I will edit. – Caligari May 28 '23 at 08:28
  • I agree that your proof is more natural than any using induction or Vandermonde matrices. However, one "classical" proof I know of is to multiply the equation $$\sum_{i=1}^m c_i v_i = 0$$ by $$\prod_{k\ne j}\big(T-\lambda_k I\big)$$ for each $j$, giving $$\prod_{k\ne j}\big(\lambda_j-\lambda_k\big)\,c_j v_j = 0$$ and hence $c_j = 0$. (A worked $m=2$ instance is sketched after this thread.) – lonza leggiera May 28 '23 at 13:49
  • @lonzaleggiera I have never seen this one, thank you! To me, it uses the same idea as the inductive one, but it avoids induction in a very elegant way. – Caligari May 29 '23 at 22:00
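
A minimal worked instance of the determinant argument from the comments, taking $\operatorname{dim} W = 2$ just to fix ideas (the matrix entries $a,b,c,d$ are introduced here only for illustration): writing $T$ in an arbitrary basis as $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$,
$$p(\lambda) = \operatorname{det}(T-\lambda\mathbf{1}) = \lambda^2 - (a+d)\lambda + (ad-bc),$$
a polynomial of degree $2 = \operatorname{dim} W$, so $T$ has at most $2$ eigenvalues. In general $p$ has degree $\operatorname{dim} W$, and a polynomial of degree $k$ has at most $k$ roots, so no independence of eigenvectors is invoked anywhere.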

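A minimal worked instance of the multiplication trick from the last comments, for $m=2$ with $\lambda_1 \ne \lambda_2$: applying $T-\lambda_2 I$ to $c_1 v_1 + c_2 v_2 = 0$ gives
$$(T-\lambda_2 I)(c_1 v_1 + c_2 v_2) = c_1(\lambda_1-\lambda_2)v_1 = 0,$$
so $c_1 = 0$, since $\lambda_1 \ne \lambda_2$ and $v_1 \ne 0$; applying $T-\lambda_1 I$ instead gives $c_2 = 0$ in the same way.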