
I have searched for my question above and came across the "Trivial intersection of generalised eigenspaces" post on Math Stack Exchange, but I do not understand the proof using coprime polynomials. How do I prove such a statement (below) using just the definition of eigenvalues/generalised eigenspaces?

I have seen/proven that if $\lambda \neq \mu$, then $E_\lambda(T) \cap K_\mu(T) = \{ \mathbf{0} \}$, where $E_\lambda(T)$ is the eigenspace corresponding to the eigenvalue $\lambda$. (I am not sure whether this information is required for the proof.)

Let $ T: V \rightarrow V$ be a linear operator where $V$ is a finite dimensional vector space over $ \mathbb{C} $.

I want to prove that $$ \text{If } \lambda \neq \mu, \text{then } K_\mu(T) \ \cap \ K_\lambda(T) = \{\bf{0}\} $$ where $$ K_\lambda(T) = \{ \mathbf{v} \in V : (T-\lambda I_V)^m(\mathbf{v})=\mathbf{0} \text{ for some } m \} $$ Currently, the lecturer has only given the above definition of generalised eigenspaces (he allows $m$ to differ for different $\mathbf{v} \in K_\lambda(T)$; he has not yet proved that a single $m$ can be chosen that works for all $\mathbf{v}$ in the generalised eigenspace).
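To make the definition concrete, here is a small sanity check on a hypothetical matrix (not from the course) using sympy: since $V$ is finite dimensional, taking $m = \dim V$ turns out to suffice for every $\mathbf{v}$, so the kernel of $(T-\lambda I)^{\dim V}$ recovers $K_\lambda(T)$.

```python
import sympy as sp

# Hypothetical example: a 2x2 Jordan block with eigenvalue 3, plus an eigenvalue 5.
T = sp.Matrix([[3, 1, 0],
               [0, 3, 0],
               [0, 0, 5]])
lam = 3
I3 = sp.eye(3)

# v lies in K_lam(T) iff (T - lam*I)^m v = 0 for some m; m = dim V = 3 suffices,
# so the generalised eigenspace is the kernel of (T - lam*I)^3.
basis = ((T - lam*I3)**3).nullspace()
assert len(basis) == 2   # e1 is an eigenvector; e2 is only a generalised eigenvector
```

Here $e_1$ satisfies $(T-3I)e_1 = 0$ while $e_2$ needs $(T-3I)^2 e_2 = 0$, illustrating that the exponent $m$ can differ between vectors of the same $K_\lambda(T)$.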

Anyway,

I tried to prove the above statement by contradiction but I got stuck:

Let $\lambda \neq \mu$ and assume there exists a non-zero vector $\mathbf{v} \in V$ such that $\mathbf{v} \in K_\mu(T) \cap K_\lambda(T)$.

Then $$ (T-\mu I_V)^m(\mathbf{v}) = \mathbf{0} = (T-\lambda I_V)^n(\mathbf{v}) $$ $$ (T-\mu I_V)^m(\mathbf{v}) = (T-\lambda I_V)^n(\mathbf{v})$$ $$ (T-\mu I_V)^m(\mathbf{v}) - (T-\lambda I_V)^n(\mathbf{v}) =\mathbf{0} $$

And I'm not sure how to proceed.

Thank you for your time!!

user1551
Ryan Seah

3 Answers


I think you indeed need to use Bezout's identity for the polynomials $f(x)=(x-\lambda)^n$ and $g(x)=(x-\mu)^m$ which are clearly coprime.

Bezout's identity says that there exist polynomials $p$ and $q$ such that $pf+qg=1$.

But then $p(T)(T-\lambda I)^n+q(T)(T-\mu I)^m=I$, so applying it to a hypothetical common generalised eigenvector $v$, we get $$v=Iv=p(T)(T-\lambda I)^nv+q(T)(T-\mu I)^mv=0+0=0\,.$$
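The Bezout step can be verified numerically. A minimal sketch with sympy, on an assumed sample matrix with eigenvalues $\lambda=1$ and $\mu=2$ (the matrix and the exponents $n$, $m$ are illustrative choices, not from the answer):

```python
import sympy as sp

x = sp.symbols('x')
lam, mu, n, m = 1, 2, 2, 3        # assumed sample values
f = (x - lam)**n
g = (x - mu)**m

# Extended Euclidean algorithm for polynomials: s*f + t*g = gcd(f, g) = 1,
# since f and g share no root and are therefore coprime.
s, t, h = sp.gcdex(f, g, x)
assert sp.expand(s*f + t*g) == 1

def poly_at(expr, M):
    """Evaluate a polynomial expression in x at the matrix M."""
    p = sp.Poly(expr, x)
    result = sp.zeros(M.rows, M.cols)
    for (k,), c in p.terms():
        result += c * M**k
    return result

# A matrix with eigenvalues 1 (Jordan block) and 2; the Bezout identity,
# evaluated at T, should give the identity matrix.
T = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])
I3 = sp.eye(3)
lhs = poly_at(s, T) * (T - lam*I3)**n + poly_at(t, T) * (T - mu*I3)**m
assert lhs == I3
```

Applying `lhs` to any vector in the intersection of the two generalised eigenspaces then kills both terms, reproducing the answer's conclusion $v = 0$.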

Berci

A simple-minded approach would be to use Cayley-Hamilton and Sylvester's Rank Inequality. The former tells us, for matrices in $\mathbb C^{n\times n}$ (or over any algebraically closed field), where $T$ has $m$ distinct eigenvalues,
$\mathbf 0 =p\big(T\big) = \big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots\big(\lambda_{m-1} I-T\big)^{k_{m-1}}\big(\lambda_m I-T\big)^{k_m}$

we know for $Z:=\big(\lambda_j I-T\big)$ and any natural number $r$
$\dim \ker Z^{r} $
$=\text{geometric multiplicity of eig 0 for }Z^r$
$\leq \text{algebraic multiplicity of eig 0 for }Z^r$
$=\text{algebraic multiplicity of eig 0 for }Z $
$= k_j$

Now apply equivalent forms of Sylvester's Rank Inequality twice to get
$n$
$= k_1+k_2+\dots+k_{m-1}+k_m$
$\geq \sum_{j=1}^{m} \dim\ker\Big(\big(\lambda_j I-T\big)^{k_j}\Big)$
$=\Big(\sum_{j=1}^{m-1} \dim\ker\Big(\big(\lambda_j I-T\big)^{k_j}\Big)\Big)+ \dim \ker \Big(\big(\lambda_m I-T\big)^{k_m}\Big)$
$\geq \dim\ker\Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\Big)+ \dim \ker \Big(\big(\lambda_m I-T\big)^{k_m}\Big)$
$\geq \dim\ker\Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\big(\lambda_m I-T\big)^{k_m}\Big)$
$=\dim\ker\big(\mathbf 0\big)$
$=n$

The first inequality being met with equality tells us $\dim\ker\Big(\big(\lambda_m I-T\big)^{k_m}\Big)=k_m$; and since $\ker\Big(\big(\lambda_m I-T\big)^{k_m}\Big)\subseteq \ker\Big(\big(\lambda_m I-T\big)^{n}\Big)$ while the multiplicity bound above caps the latter dimension at $k_m$ as well, we also get $\dim\ker\Big(\big(\lambda_m I-T\big)^{n}\Big)=k_m$.

The final inequality is met with equality, so Sylvester tells us
$\dim\ker\Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\Big) $
$= \dim\left(\ker\Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\Big)\cap \operatorname{image}\big(\lambda_m I-T\big)^{k_m}\right)$
$\implies \dim\left(\ker\Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\Big)\cap \ker \big(\lambda_m I-T\big)^{k_m}\right) = 0$
since $V = \ker \big(T-\lambda_m I\big)^n\oplus \operatorname{image} \big(T-\lambda_m I\big)^n$ and $\ker \big(T-\lambda_m I\big)^n = \ker \big(T-\lambda_m I\big)^{k_m}$.

Now suppose $v \in W_m$ and $v \in W_1 + W_2 + \dots + W_{m-1}$, where $W_j := \ker\Big(\big(\lambda_j I-T\big)^{k_j}\Big)$ is the generalised eigenspace for $\lambda_j$
$\implies v \in \ker \Big(\big(\lambda_m I-T\big)^{k_m}\Big)$ and $v \in \ker \Big(\big(\lambda_1 I-T\big)^{k_1}\big(\lambda_2 I-T\big)^{k_2}\dots \big(\lambda_{m-1} I-T\big)^{k_{m-1}}\Big)$
$\implies v = \mathbf 0 $
$\implies V = W_1 \oplus W_2 \oplus \dots \oplus W_{m-1} \oplus W_m$
since the labeling of eigenvalues is arbitrary.
(That is: for $w_j \in W_j$, if $\sum_j w_j = \mathbf 0$ with some $w_i\neq 0$, set $v:= w_i$ and then re-label $\lambda_i$ and $\lambda_m$, to get a contradiction.)

This is a stronger result than what the OP asked for. I.e. we can restrict to the vector (sub)space $V':= W_i \oplus W_j = W_i + W_j$ for $i\neq j$: then $\dim V' = \dim W_i + \dim W_j$, while $\dim\big(W_i + W_j\big) = \dim W_i + \dim W_j - \dim\big(W_i \cap W_j\big)$, hence $\dim\big(W_i \cap W_j\big)=0$.
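The dimension count above can be checked numerically. A hedged sketch with numpy, on an assumed $4\times 4$ example with eigenvalues $1$ and $2$ (the matrix and the `null_space_basis` helper are illustrative, not part of the answer):

```python
import numpy as np

# Hypothetical example: eigenvalue 1 with algebraic multiplicity 3 (one Jordan
# block) and eigenvalue 2 with multiplicity 1.
T = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 2.]])
n = T.shape[0]
I = np.eye(n)

def null_space_basis(M, tol=1e-10):
    """Orthonormal basis of ker(M), read off from the SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T          # columns span the kernel

# Generalised eigenspaces W_j = ker((lambda_j I - T)^n); n = dim V always suffices.
W1 = null_space_basis(np.linalg.matrix_power(T - 1*I, n))
W2 = null_space_basis(np.linalg.matrix_power(T - 2*I, n))

# dim(W1 ∩ W2) = dim W1 + dim W2 - dim(W1 + W2), and dim(W1 + W2) is the rank
# of the concatenated basis matrix.
dim_sum = np.linalg.matrix_rank(np.hstack([W1, W2]))
dim_intersection = W1.shape[1] + W2.shape[1] - dim_sum
assert dim_intersection == 0
assert W1.shape[1] + W2.shape[1] == n   # here V = W1 ⊕ W2, matching k_1 + k_2 = n
```

The last assertion mirrors the answer's stronger conclusion that the generalised eigenspaces give a direct-sum decomposition of $V$.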

user8675309
  • I don't see how you can use geometric multiplicity of $\lambda_j$ to establish an upper bound on $\dim \ker \big(\lambda_j I-T\big)^{k_j}$. The dimension of that nullspace can be bigger than the geometric multiplicity of the eigenvalue. For example, $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ has characteristic polynomial $(1 - \lambda)^2$ (a double eigenvalue), but only one eigenvector $[1, 0]^T$ (the geometric multiplicity is 1). – vladimirm Jul 10 '23 at 02:29
  • 1
    @vladimirm - I use the algebraic multiplicity as an upper bound, not the geometric multiplicity. The argument comes from the fact that the geometric multiplicity is always bounded above by the algebraic multiplicity, so $\dim \ker Z^{r} =\text{geo multiplicity of eig 0 for }Z^r \leq $ $\text{alg multiplicity of eig 0 for }Z^r =\text{alg multiplicity of eig 0 for }Z = k_j$. Your statement "The dimension of that nullspace can be bigger than the geometric multiplicity of the eigenvalue" is false for the eigenvalue zero; the dimension of the nullspace is the geometric multiplicity of eig 0. – user8675309 Jul 10 '23 at 15:46
  • 1
    Ah! Got it. Thanks for the explanation! I missed the fact that you were talking about eigenvalue 0. – vladimirm Jul 11 '23 at 12:19

For the case when $\mu$ and $\lambda$ are eigenvalues of $T$, there is a nice one-line proof (if you know that generalised eigenvectors corresponding to distinct eigenvalues are linearly independent):

Suppose for a contradiction that $0\neq v\in K_\mu(T) \cap K_\lambda(T)$ with $\lambda \neq \mu$. Then $v$ is a generalised eigenvector corresponding to $\mu$ as well as a generalised eigenvector corresponding to $\lambda$, so $v$ and $v$ are linearly independent, a contradiction. So no non-zero vector lies in $K_\mu(T) \cap K_\lambda(T)$, and since $\{0\} \subseteq K_\mu(T) \cap K_\lambda(T)$ always holds, we conclude that $K_\mu(T) \cap K_\lambda(T) = \{0\}$. $\square$

(The general case was very nicely proved in the earlier answers.)

Vadim