
Let $L$ be a semisimple Lie algebra. I am trying to understand the root space decomposition of $L$ on my own. Since $L$ is semisimple, $L$ possesses an abelian maximal toral subalgebra, i.e. an abelian subalgebra $H$ which is $\text{ad}$-semisimple, known as a Cartan subalgebra. But then $\text{ad}(H)$ is a commuting family of semisimple operators on $L$, and hence they are simultaneously diagonalizable. So there exist a basis $\{e_1, \cdots, e_n \}$ of $L$ and a $\lambda_i \in H^{\ast}$ corresponding to each basis element $e_i$ such that $$[h, e_i] = \lambda_i (h) e_i$$ for all $h \in H.$ Define $$L_{\lambda_i} : = \left \{x \in L\ |\ [h, x] = \lambda_i (h) x\ \text {for all}\ h \in H \right \}.$$

Then it's clear that $L = \sum\limits_{i = 1}^{n} L_{\lambda_i}.$ But I can't see why the sum is direct. First of all, how can I conclude that all the $\lambda_i$'s are distinct? If for some $i \neq j$ we have $\lambda_i = \lambda_j$, then clearly $L_{\lambda_i} = L_{\lambda_j}$, and then the sum won't be direct. So first we have to somehow show that all the $\lambda_i$'s are distinct. And if they are distinct, the $L_{\lambda_i}$'s are all one-dimensional.

In particular, if the sum is direct, then $H$ is also one-dimensional. Is that always the case?
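For concreteness, here is a small numerical sketch of the setup above (my own check, not from any book), taking $L = \mathfrak{sl}_3$ with the diagonal Cartan subalgebra; the helper names `E` and `weight` are ad hoc. It exhibits a simultaneous eigenbasis of $\text{ad}(H)$ and records each $\lambda_i$ by its values on a basis of $H$:

```python
import numpy as np
from itertools import product

# Basis of sl(3, R): the traceless diagonal matrices h1, h2 span the Cartan
# subalgebra H; the matrix units E_ij (i != j) complete the basis.
def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

h1 = np.diag([1.0, -1.0, 0.0])
h2 = np.diag([0.0, 1.0, -1.0])
basis = [h1, h2] + [E(i, j) for i, j in product(range(3), repeat=2) if i != j]

def ad(h, x):
    return h @ x - x @ h   # ad(h)(x) = [h, x]

# Each basis element x satisfies ad(h)(x) = lambda(h) * x for h in {h1, h2};
# record the pair (lambda(h1), lambda(h2)) as the "weight" of x.
def weight(x):
    w = []
    for h in (h1, h2):
        b = ad(h, x)
        if np.allclose(b, 0):
            w.append(0.0)
        else:
            mask = np.abs(x) > 1e-12
            w.append((b[mask] / x[mask])[0])
    return tuple(w)

weights = [weight(x) for x in basis]
print(weights)
# The zero weight occurs twice (for h1 and h2, since H is abelian), so the
# lambda_i are NOT all distinct here, and H is two-dimensional; the six
# nonzero weights turn out to be mutually distinct.
```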

Could anyone please answer these questions? Also, please let me know where I am going wrong, if anywhere.

Thanks for your time.

ACB
  • The most difficult item to show is $\dim L_{\lambda_i}=1$ – kabenyuk Feb 28 '23 at 12:10
  • @kabenyuk$:$ How to show that the sum is direct, or equivalently that $\dim L_{\lambda_i} = 1$ for all $i$? Also, there exists $i \in \{1, \cdots, n\}$ such that $\lambda_i = 0$, which corresponds to the centralizer of $H$ in $L$, and it is well known that this centralizer is $H$ itself. So if the sum is direct, doesn't it imply that $H$ has to be one-dimensional? – ACB Feb 28 '23 at 12:43
  • I didn't write, but $L_0=H$ and the dimension of $H$ can be as large as you like. The proof that $\dim L_\lambda=1$ for non-zero $\lambda$ uses some facts about representations of the algebra $sl_2$. It is written in many books. I don't know which book you are reading. – kabenyuk Feb 28 '23 at 12:57
  • Check out Dietrich Burde's answer here, it might help you. – kabenyuk Feb 28 '23 at 13:05
  • @kabenyuk$:$ In the linked answer the author wrote that $\sum\limits_{i = 1}^{n} L_{\lambda_i}$ is direct because eigenvectors in different eigenspaces are linearly independent. Here $L_{\lambda_i}$ is not quite an eigenspace, as the eigenvalues keep changing as we vary the elements of $H$. I don't understand what exactly he meant. – ACB Feb 28 '23 at 13:47
  • Related: https://math.stackexchange.com/q/2095754/96384 – Torsten Schoeneberg Feb 28 '23 at 16:09
  • @AnilBagchi. The eigenvalues change, but the eigenspaces do not. This is where the commuting semisimple elements (i.e. toral) condition comes in. They all preserve each other's eigenspaces. – Callum Feb 28 '23 at 16:16
  • @Callum$:$ I agree that the eigenspace of $\text {ad} (h)$ corresponding to the eigenvalue $\lambda_i (h)$ is $L_{\lambda_i}$ for all $h \in H.$ So as $h$ varies the eigenvalues will vary, but the corresponding eigenspaces are the same in every case. But in order to show that elements of different $L_{\lambda_i}$'s are linearly independent, we need to find a fixed $h \in H$ such that the corresponding eigenvalues are all distinct, i.e. some $h \in H$ such that $\lambda_i (h) \neq \lambda_j(h)$ whenever $\lambda_i \neq \lambda_j.$ Only then can we conclude that the corresponding eigenvectors are linearly independent. – ACB Feb 28 '23 at 16:29
  • @Callum$:$ How to even show that $\lambda_i$'s are all distinct? – ACB Feb 28 '23 at 16:32
  • @TorstenSchoeneberg$:$ In my case how to show that $\lambda_i$'s are distinct and the elements of $L_{\lambda_i}$'s are linearly independent? – ACB Feb 28 '23 at 16:34

1 Answer


With your definitions / setup / notation, the $\lambda_i$ are not yet distinct in general: if we choose the basis $e_1, ..., e_n$ so that the first $r$ elements $e_1, ... e_r$ are a basis of $H$ (which is its own $0$-space, a.k.a. its own centralizer), then $\lambda_1 = ... = \lambda_r = 0 \in H^*$.

The correct way to define the root space decomposition starting from your notation is to define $R:= \{\lambda_i: 1\le i \le n\} \color{red}{\setminus \{0\}}$, and write

$$L = L_0 \oplus \bigoplus_{\alpha \in R} L_{\alpha}$$

Note that I'm kind of cheating here because I have now made the $\alpha$ mutually distinct by definition. From there, the standard way to proceed is to note $H=L_0$ and then show that each of the "proper" root spaces $L_\alpha$ ($\alpha \in R)$ has dimension $1$ -- which is more involved than one might think, cf. Why are root spaces of root decomposition of semisimple Lie algebra 1 dimensional? and When is the Lie bracket of two root spaces nonzero? including links from there, also discussion in https://math.stackexchange.com/a/4583911/96384.

Once one has that, of course it is clear in hindsight that the $\color{red}{\text{nonzero}}$ $\lambda_i$, i.e. in my above notation the $\lambda_i$ with $r+1 \le i \le n$, were mutually distinct.

After that comes the real fun in showing that $R$ is a root system in a natural way.

I also advertise my own answer here for better understanding of the root space decomposition by seeing it in a basic but telling matrix example. That should clear things up.
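To make the displayed decomposition concrete, here is a hedged NumPy sketch (my own choice of example, $\mathfrak{sl}_3$ with the diagonal Cartan subalgebra; `weight` is an ad-hoc helper): grouping a simultaneous eigenbasis of $\text{ad}(H)$ by weight realizes $L_0 \oplus \bigoplus_{\alpha \in R} L_\alpha$, and the directness of the sum shows up as the dimensions adding up to $\dim L$, equivalently as the grouped vectors having full rank:

```python
import numpy as np
from itertools import product

# Simultaneous eigenbasis of ad(H) for sl(3): Cartan part h1, h2 plus
# the matrix units E_ij (i != j).
def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

h1 = np.diag([1.0, -1.0, 0.0])
h2 = np.diag([0.0, 1.0, -1.0])
basis = [h1, h2] + [E(i, j) for i, j in product(range(3), repeat=2) if i != j]

def weight(x):
    # (lambda(h1), lambda(h2)) with [h, x] = lambda(h) x for h in H
    out = []
    for h in (h1, h2):
        b = h @ x - x @ h
        mask = np.abs(x) > 1e-12
        out.append(0.0 if np.allclose(b, 0) else (b[mask] / x[mask])[0])
    return tuple(out)

# Group the basis vectors by weight: this realizes L = L_0 + sum L_alpha,
# with the alpha mutually distinct by construction.
spaces = {}
for x in basis:
    spaces.setdefault(weight(x), []).append(x)

dims = {w: len(v) for w, v in spaces.items()}
total = sum(dims.values())

# The sum is direct iff the dimensions add up to dim L = 8, equivalently
# iff the grouped vectors are jointly linearly independent.
stack = np.array([x.flatten() for v in spaces.values() for x in v])
rank = np.linalg.matrix_rank(stack)
print(dims, total, rank)
# L_0 = H is 2-dimensional; each of the six proper root spaces is
# 1-dimensional, and 2 + 6 = 8 = dim sl(3), so the sum is direct.
```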

  • How do you know that the sum $L_0 + \sum\limits_{\alpha \in R} L_{\alpha}$ is direct? How do you even know that the $L_{\alpha}$'s are all distinct for $\alpha \in R$? – ACB Feb 28 '23 at 19:52
  • In order to show that the sum is direct, I think it's enough to show that elements taken from different $L_{\alpha}$'s are linearly independent. If for any $\alpha \neq 0$ we can show that $L_{\alpha}$ is either $(0)$ or $1$-dimensional, then that would imply the sum is indeed direct. – ACB Feb 28 '23 at 20:20
  • So from what follows, it turns out that if $L$ is a semisimple Lie algebra of dimension $d$, then there exists a basis $\{h_1, \cdots, h_m\}$ of $H$ which extends to a basis $\{h_1, \cdots, h_m, e_1, \cdots, e_{d - m}\}$ of $L$ consisting of simultaneous eigenvectors of $\text {ad} (H).$ So for any $\alpha = 1, \cdots, d-m$ there exists $\lambda_{\alpha} \in H^{\ast} \setminus \{0\}$ such that $[h, e_{\alpha}] = \lambda_{\alpha} (h) e_{\alpha}$ for all $h \in H.$ So according to me the root set $\Phi$ of $L$ is $$\Phi := \left\{ \lambda_{\alpha}\ |\ \alpha \in \{1, \cdots, d - m\} \right\}.$$ Am I right? – ACB Feb 28 '23 at 20:39
  • @AnilBagchi. You don't want to prove 1-dimensional before you see the sum is direct. The sum is direct because that is what it means for $\mathfrak{h}$ to be simultaneously diagonalisable – Callum Mar 02 '23 at 15:51
  • @Callum$:$ Please don't divert me from where I am with intricate language. By simultaneous diagonalizability, what I know is that there exists a basis of $L$ such that each and every operator in $\text {ad} (H)$ is diagonal with respect to it. From here, how do you prove that the sum $\sum\limits_{\alpha \in \Phi} L_{\alpha}$ is direct? Don't assume anything; try to prove it from scratch. Expressing things in words is sometimes very misleading and non-rigorous. – ACB Mar 02 '23 at 22:12
  • @AnilBagchi. What does it mean for an operator to be diagonal with respect to a basis? It means that that basis is precisely a basis of eigenvectors for that operator. To be simultaneously diagonalisable we have a basis that works for all of $\mathfrak{h}$ at once. At this point, we don't know whether some of these basis vectors have the same "generalised eigenvalue" (a.k.a. weight or root) $\alpha$ as each other and so some of our $L_\alpha$ could in theory be bigger than 1-dimensional ($L_0$ certainly is). However, as a basis, they are already linearly independent. – Callum Mar 02 '23 at 22:56
  • Exactly. Anil, if you already have your basis $\{h_1, ..., h_m, e_1, ..., e_{d-m}\}$, then order it further so that $e_1, ..., e_{i_1}$ belong to (and hence span) $L_{\alpha_1}$, $e_{i_1+1}, ..., e_{i_2}$ belong to (hence span) $L_{\alpha_2}$ etc. Then it is obvious that the sum of those $L_\alpha$ is direct, they are disjoint spans of mutually disjoint subsets of your basis vectors. And yes, in hindsight, once you have proved they are each one-dimensional, that means $i_j = j$ for all $1 \le j \le d-m$, there are $d-m$ roots, and the root system is as you describe. – Torsten Schoeneberg Mar 03 '23 at 02:28
  • @Callum$:$ If an operator $T$ is diagonalizable with respect to a basis then the underlying vector space can be decomposed as a direct sum of eigenspaces of $T.$ – ACB Mar 03 '23 at 10:18
  • @Callum$:$ How do you prove here that for different $\alpha$'s the elements of the $L_{\alpha}$'s are linearly independent? For that, don't you need to find some $h \in H$ (independent of the roots) such that $\alpha (h) \neq \beta (h)$ for all $\alpha, \beta \in \Phi$ with $\alpha \neq \beta$? Here $\Phi$ denotes the root set of $L$ with respect to the Cartan subalgebra $H$ of $L.$ I first want an answer to this question. – ACB Mar 03 '23 at 10:26
  • @AnilBagchi. Did you read my comment? – Torsten Schoeneberg Mar 03 '23 at 15:51
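On the separate question raised in the comments, of a single $h \in H$ on which distinct roots take distinct values: such a regular element always exists, since the finitely many kernels of the differences $\alpha - \beta$ ($\alpha \neq \beta$) are proper subspaces of $H$ and cannot cover $H$ over an infinite field. A small NumPy sketch for $\mathfrak{sl}_3$ (my own example; the element $\mathrm{diag}(0,1,3) - \tfrac{4}{3}I$ is one such choice, since the pairwise differences of $0, 1, 3$ are all distinct):

```python
import numpy as np
from itertools import product

# A regular element of the Cartan subalgebra of sl(3): the pairwise
# differences of the diagonal entries (0, 1, 3) are all distinct and
# nonzero, so every nonzero root takes a different value on it.
h = np.diag([0.0, 1.0, 3.0]) - (4.0 / 3.0) * np.eye(3)  # made traceless

def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

basis = [np.diag([1.0, -1.0, 0.0]), np.diag([0.0, 1.0, -1.0])] + \
        [E(i, j) for i, j in product(range(3), repeat=2) if i != j]

# ad(h) acts diagonally on this basis; collect its eigenvalues alpha(h).
eigvals = []
for x in basis:
    b = h @ x - x @ h
    if np.allclose(b, 0):
        eigvals.append(0.0)
    else:
        mask = np.abs(x) > 1e-12
        eigvals.append(round((b[mask] / x[mask])[0], 9))

print(sorted(eigvals))
# 0 occurs with multiplicity dim H = 2; the six nonzero eigenvalues are
# mutually distinct, so eigenvectors for distinct roots are eigenvectors
# of the single operator ad(h) for distinct eigenvalues, hence linearly
# independent.
```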