
I am curious about computing the root system (as linear functionals) using minimal prerequisites and straightforward computation. For example, for Type A the eigenvalues of $\mathrm{ad}_H$ can be computed directly.

In Hall's Lie Groups, Lie Algebras, and Representations: An Elementary Introduction it is computed by complexification.

In Rossmann's Linear Groups the computation for Types B–D is skipped.

From my understanding, I felt like the process is: (i) fix an appropriate Cartan subalgebra; (ii) find the roots (as eigenvalues) of $\mathrm{ad}_H$; (iii) find the corresponding root vectors, and we are done. (*)

Unfortunately, I haven't found many worked examples, and most resources just state the roots and root vectors directly. So I'm not sure whether (*) will work. After a few trials, I realized many problems can occur in this process; for one, the Cartan subalgebra is not unique, and there are many conjugates of it.
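For what it's worth, here is how step (ii) of the plan looks concretely for Type A (a sketch of my own, using numpy; the diagonal Cartan subalgebra is assumed, and I work in $\mathfrak{gl}_3$, which only adds one extra zero eigenvalue compared to $\mathfrak{sl}_3$): the eigenvalues of $\mathrm{ad}_H$ for a generic diagonal $H=\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)$ are exactly the differences $\lambda_i-\lambda_j$.

```python
import numpy as np

# generic diagonal element of the Cartan subalgebra (working in gl_3;
# sl_3 just drops one of the zero eigenvalues of ad_H)
l = np.array([1.0, 2.0, 4.0])
H = np.diag(l)

# ad_H as a 9x9 matrix on gl_3, X -> HX - XH (row-major vectorisation)
I = np.eye(3)
adH = np.kron(H, I) - np.kron(I, H.T)

eigs = np.sort(np.linalg.eigvals(adH).real)
expected = np.sort([l[i] - l[j] for i in range(3) for j in range(3)])
assert np.allclose(eigs, expected)   # roots are the differences l_i - l_j
```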

Can somebody enlighten me?

Dinoman
  • @MarianoSuárez-Álvarez Yes. But carrying out the plan in all Type B-D doesn't seem to work, and I didn't know why. – Dinoman Dec 28 '23 at 18:54
  • @MarianoSuárez-Álvarez May I ask i. Will any Cartan subalgebra do the job? ii. If we do the diagonal trick to obtain an easier Cartan subalgebra, we will only have eigenvalues of the form $\lambda_i - \lambda_j$, so step (ii) already failed. Why? – Dinoman Dec 28 '23 at 19:00
  • @MarianoSuárez-Álvarez Thanks a lot!! I think I have located the source of my misunderstanding – Dinoman Dec 28 '23 at 19:13
  • @MarianoSuárez-Álvarez Sure. Later today I will post one answer! – Dinoman Dec 28 '23 at 19:36
  • Here are a few ways to approach this (in the case of $C_n$ but the idea applies more generally). – Callum Dec 28 '23 at 23:38

4 Answers


I have been feeling a bit rusty with Lie algebra basics lately, so I decided to do $\mathfrak{so}_5$ as a refresher. Also as an excuse to review some material from Humphreys (R.I.P. — with my deepest eternal respect).

To avoid the trap of using shortcuts, like assuming that we get a Cartan subalgebra by looking at the diagonal matrices, I picked the $SO_5(\Bbb{R})$-route, with the preserved quadratic form coming from the Euclidean norm. In other words, I declare $$ L:=\mathfrak{so}_5=\{A\in M_{5\times5}(\Bbb{C})\mid A^T=-A\}. $$ You see that I did complexify the Lie algebra; this is prudent, because complex eigenvalues will appear.

For all pairs of indices, $1\le i,j\le 5$, I denote by $E_{ij}$ the matrix with a single non-zero entry $1$ at position $(i,j)$. Then a basis for $L$ consists of the $10$ matrices $$ S_{ij}:=E_{ij}-E_{ji}, 1\le i<j\le5. $$ To simplify the formulas I also adopt the convention $S_{ij}:=-S_{ji}$ whenever $j<i$. It is then a simple matter to calculate the commutator formula $$ [S_{ij},S_{k\ell}]=\delta_{jk}S_{i\ell}+\delta_{i\ell}S_{jk}+\delta_{ki}S_{\ell j}+\delta_{\ell j} S_{ki}. $$ But all we really need is the realization that if $\{i,j\}\cap\{k,\ell\}=\emptyset$ then $S_{ij}$ and $S_{k\ell}$ obviously commute, and if the index pairs intersect in a singleton set, then we have a copy of $\mathfrak{so}_3$ with the familiar commutator relations $$ [S_{12},S_{23}]=S_{13},\quad [S_{23},S_{31}]=S_{21},\quad [S_{31},S_{12}]=S_{32}, $$ where we can put any distinct indices $i,j,k$ in place of $1,2,3$.
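The commutator formula can be checked mechanically. Here is a small numerical verification of my own (not part of the argument), brute-forcing all index combinations with numpy:

```python
import numpy as np
from itertools import product

n = 5
def E(i, j):
    m = np.zeros((n, n)); m[i-1, j-1] = 1.0; return m

def S(i, j):
    return E(i, j) - E(j, i)            # so S_ji = -S_ij and S_ii = 0

d = lambda a, b: 1.0 if a == b else 0.0  # Kronecker delta

ok = True
for i, j, k, l in product(range(1, n+1), repeat=4):
    lhs = S(i, j) @ S(k, l) - S(k, l) @ S(i, j)
    rhs = (d(j, k)*S(i, l) + d(i, l)*S(j, k)
           + d(k, i)*S(l, j) + d(l, j)*S(k, i))
    ok = ok and np.allclose(lhs, rhs)
assert ok   # the delta-formula holds for every index combination
```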

Next I want to prove the claim that all the elements $S_{ij}$, $1\le i<j\le5$, are $\mathrm{ad}$-semisimple; that is, the linear transformations $\mathrm{ad}(S_{ij}):L\to L, x\mapsto [S_{ij},x]$ are diagonalizable. Let's look at $\mathrm{ad}(S_{12})$; the others are obtained with the appropriate index substitutions. While at it, I will write down the eigenspaces of $\mathrm{ad}(S_{12})$, as we will be needing them shortly. I will denote by $V_\lambda$ the eigenspace belonging to the eigenvalue $\lambda$:

  • The space $V_0$ is $4$-dimensional, spanned by $S_{12},S_{34},S_{35},S_{45}$; for these, the index pairs intersect $\{1,2\}$ either fully or trivially.
  • The matrices $S_{12},S_{13},S_{23}$ span a copy of $\mathfrak{so}_3$. We see easily that $$[S_{12},S_{13}+iS_{23}]=i(S_{13}+iS_{23})$$ and $$[S_{12},S_{13}-iS_{23}]=-i(S_{13}-iS_{23}).$$ So within this span the eigenvalues $\pm i$ both have multiplicity $1$. Observe that $S_{12}$ itself spans the intersection of this copy of $\mathfrak{so}_3$ and $V_0$.
  • The exact same thing happens in the other copies of $\mathfrak{so}_3$ respectively spanned by $S_{12},S_{14},S_{24}$ and $S_{12},S_{15},S_{25}$.
  • Hence we have found two 3-dimensional eigenspaces of $\mathrm{ad}(S_{12})$: $$V_i=\langle S_{13}+iS_{23},S_{14}+iS_{24},S_{15}+iS_{25}\rangle$$ and $$V_{-i}=\langle S_{13}-iS_{23},S_{14}-iS_{24},S_{15}-iS_{25}\rangle.$$

The dimensions of the eigenspaces add up to $10=\dim L$, so semisimplicity of $\mathrm{ad}(S_{12})$ follows.
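As a numerical cross-check (my own addition, assuming numpy), one can build the matrix of $\mathrm{ad}(S_{12})$ in the basis $\{S_{ij}\}$ and confirm the eigenvalue multiplicities $4,3,3$:

```python
import numpy as np
from collections import Counter

n = 5
def S(i, j):
    m = np.zeros((n, n))
    m[i-1, j-1], m[j-1, i-1] = 1.0, -1.0
    return m

basis = [S(i, j) for i in range(1, n+1) for j in range(i+1, n+1)]   # dim 10
S12 = S(1, 2)

# matrix of ad(S_12) in this basis: expand each [S_12, b] over the basis
flat = np.array([b.flatten() for b in basis]).T                      # 25 x 10
targets = np.array([(S12 @ b - b @ S12).flatten() for b in basis]).T
ad = np.linalg.lstsq(flat, targets, rcond=None)[0]                   # 10 x 10

counts = Counter(complex(round(z.real, 6), round(z.imag, 6))
                 for z in np.linalg.eigvals(ad))
assert counts[0j] == 4 and counts[1j] == 3 and counts[-1j] == 3
```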

Similarly, let me denote by $W_\mu$ the eigenspaces of $\mathrm{ad}(S_{34})$. These are (do the obvious index substitutions): $$\begin{aligned} W_0&=\langle S_{34},S_{12},S_{15},S_{25}\rangle,\\ W_i&=\langle S_{35}+iS_{45},S_{31}+iS_{41},S_{32}+iS_{42}\rangle,\\ W_{-i}&=\langle S_{35}-iS_{45},S_{31}-iS_{41},S_{32}-iS_{42}\rangle. \end{aligned} $$

As $S_{12}$ and $S_{34}$ are both (ad-)semisimple, and commute, they span a toral subalgebra $$H=\langle S_{12},S_{34}\rangle.$$ It is not immediately obvious that this is a maximal toral subalgebra, but that claim does follow from the fact that any toral subalgebra is abelian (Humphreys, p.35), and the observation that $H=V_0\cap W_0$. After all, $V_0\cap W_0$ is the subspace of $L$ consisting of the elements that commute with both $S_{12}$ and $S_{34}$.

The root spaces (with respect to this choice of $H$) are then the common eigenspaces of $\mathrm{ad}(S_{12})$ and $\mathrm{ad}(S_{34})$ such that at least one of the eigenvalues is non-zero.

We very easily spot that in this case all the intersections $V_\lambda\cap W_\mu$, $(\lambda,\mu)\neq(0,0)$, are actually non-trivial and (as expected) $1$-dimensional: $$ \begin{aligned} V_0\cap W_i&=\langle S_{35}+iS_{45}\rangle,\\ V_0\cap W_{-i}&=\langle S_{35}-iS_{45}\rangle,\\ V_i\cap W_0&=\langle S_{15}+iS_{25}\rangle,\\ V_{-i}\cap W_0&=\langle S_{15}-iS_{25}\rangle,\\ V_i\cap W_i&=\langle S_{13}+iS_{14}+iS_{23}-S_{24}\rangle,\\ V_i\cap W_{-i}&=\langle S_{13}-iS_{14}+iS_{23}+S_{24}\rangle,\\ V_{-i}\cap W_i&=\langle S_{13}+iS_{14}-iS_{23}+S_{24}\rangle,\\ V_{-i}\cap W_{-i}&=\langle S_{13}-iS_{14}-iS_{23}-S_{24}\rangle.\\ \end{aligned} $$ So if we identify $\lambda\in H^*$ with the vector $(\lambda(S_{12}),\lambda(S_{34}))$, the roots are $(\pm i,0)$, $(0,\pm i)$, $(\pm i,\pm i)$ (all four sign combinations in the last). Denoting $$ \alpha=(i,-i)\quad\text{and}\quad\beta=(0,i) $$ we can rewrite the roots as $\alpha,\beta,\alpha+\beta,\alpha+2\beta$ and their negatives, so we have a copy of the root system $B_2$.
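If you want to double-check the joint eigenvalues by machine rather than by hand, here is a small numpy script of my own. It verifies that eight simultaneous eigenvectors (with the signs of the $S_{24}$ terms as forced by the eigenvalue equations) carry exactly the root values $(\pm i,0),(0,\pm i),(\pm i,\pm i)$:

```python
import numpy as np

n = 5
def S(i, j):
    m = np.zeros((n, n), dtype=complex)
    m[i-1, j-1], m[j-1, i-1] = 1, -1
    return m

def eig_ratio(h, v):
    # eigenvalue of ad(h) on v, verifying v really is an eigenvector
    hv = (h @ v - v @ h).flatten()
    vf = v.flatten()
    k = int(np.argmax(np.abs(vf)))
    lam = hv[k] / vf[k]
    assert np.allclose(hv, lam * vf)
    return complex(round(lam.real, 6), round(lam.imag, 6))

vectors = [
    S(3,5) + 1j*S(4,5), S(3,5) - 1j*S(4,5),        # V_0 intersect W_{+-i}
    S(1,5) + 1j*S(2,5), S(1,5) - 1j*S(2,5),        # V_{+-i} intersect W_0
    S(1,3) + 1j*S(1,4) + 1j*S(2,3) - S(2,4),       # V_i  intersect W_i
    S(1,3) - 1j*S(1,4) + 1j*S(2,3) + S(2,4),       # V_i  intersect W_{-i}
    S(1,3) + 1j*S(1,4) - 1j*S(2,3) + S(2,4),       # V_{-i} intersect W_i
    S(1,3) - 1j*S(1,4) - 1j*S(2,3) - S(2,4),       # V_{-i} intersect W_{-i}
]
roots = {(eig_ratio(S(1,2), v), eig_ratio(S(3,4), v)) for v in vectors}
expected = {(1j, 0j), (-1j, 0j), (0j, 1j), (0j, -1j),
            (1j, 1j), (1j, -1j), (-1j, 1j), (-1j, -1j)}
assert roots == expected
```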

Jyrki Lahtonen
  • The calculation in the second bullet is surely familiar to all who have seen a quantum mechanical treatment of angular momentum. Hopefully you recognized the variants of the ladder operators. – Jyrki Lahtonen Dec 29 '23 at 13:52

I thought I'd add a quick overview of my preferred way to do this, as mentioned in my answer linked in my comment above.

Specifically, you can do all of this using the defining representations, without quite choosing a basis, and obtain all the root spaces (and the weight spaces of many basic representations) without needing explicit matrices.

Note that the Cartan subalgebra acts diagonalisably (over $\mathbb{C}$) on the defining representation; indeed, this is true in any representation. Intrinsically, this means that the representation can be decomposed into lines on which the Cartan subalgebra acts by scalars; in effect, we choose a diagonalising basis only up to scale. Conversely, the lines determine the Cartan subalgebra.

For $\mathfrak{sl}_n$ any choice of $n$ linearly independent lines will do: $$ \mathbb{C}^n = L_1 \oplus \dotsm \oplus L_n$$ gives a Cartan subalgebra as the trace-free elements of $\bigoplus_i L_i^* \otimes L_i$, and the root spaces are easy to guess: each $L_i^* \otimes L_j$ with $i\neq j$.

For $\mathfrak{so}_n$ we must be pickier. Playing with $\mathfrak{so}_2$, it should become clear that the two lines in that case are null lines for the symmetric bilinear form. We can then decompose $\mathbb{C}^n$ into orthogonal subspaces of dimension $2$, with a $1$-dimensional subspace left over when $n$ is odd. Consequently, our choice of lines must be a series of pairs of null lines, each pair orthogonal to every other pair (though the two lines within a pair are not orthogonal to each other), plus one extra non-null line in the odd case. So denote these $L_1, L_{-1}, L_2, L_{-2},\dots$, plus $L_0$ for the extra line, and we have $L_i \perp L_j$ unless $j = -i$.

Next we should note that $\mathfrak{so}_n$ is isomorphic to $\bigwedge^2\mathbb{C}^n$ via $$ x \wedge y \mapsto (x,\cdot) y - (y,\cdot)x.$$ We can then find our Cartan subalgebra by considering which elements of $\bigwedge^2\mathbb{C}^n$ are diagonal in this basis. Observe that $L_i \wedge L_j$ sends $L_{-i}$ to $L_j$ and $L_{-j}$ to $L_i$, and is $0$ on every other line. So in particular $L_i \wedge L_{-i}$ preserves $L_i$ and $L_{-i}$ and is $0$ elsewhere. Thus our Cartan subalgebra is $$ \mathfrak{h} = \bigoplus_i L_i \wedge L_{-i}.$$

Similarly, by calculating $[L_i \wedge L_{-i}, L_j \wedge L_k] \subset L_j \wedge L_{k}$ (to help with this you can prove $[x\wedge y, v \wedge w] = (x\wedge y)(v) \wedge w - v \wedge (x\wedge y)(w)$), you can see that the $L_j \wedge L_k$ with $k \neq j, -j$ are our root spaces. For $n$ even all the roots are the same length, but if $n$ is odd we have a set of short roots whose root spaces are exactly the $L_0 \wedge L_j$.
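To make the $\mathfrak{so}_n$ construction concrete, here is a numerical sketch of my own for $\mathfrak{so}_5$, with an explicitly chosen Gram matrix for the null-line pairs (the particular basis vectors and sign conventions are my assumptions, not canonical):

```python
import numpy as np

# Gram matrix for the basis v_1, v_{-1}, v_2, v_{-2}, v_0:
# (v_i, v_j) = 1 iff j = -i, and (v_0, v_0) = 1
G = np.zeros((5, 5))
G[0, 1] = G[1, 0] = G[2, 3] = G[3, 2] = G[4, 4] = 1.0
e = np.eye(5)
L = {1: e[0], -1: e[1], 2: e[2], -2: e[3], 0: e[4]}

def wedge(x, y):
    # x wedge y acting as (x,.)y - (y,.)x
    return np.outer(y, G @ x) - np.outer(x, G @ y)

h1, h2 = wedge(L[1], L[-1]), wedge(L[2], L[-2])

# the images really lie in so(G): M^T G + G M = 0
for M in (h1, h2):
    assert np.allclose(M.T @ G + G @ M, 0)

# h_i = v_i wedge v_{-i} acts diagonally on the chosen lines
# (with this sign convention: -1 on L_i, +1 on L_{-i}, 0 elsewhere)
assert np.allclose(h1 @ L[1], -L[1]) and np.allclose(h1 @ L[-1], L[-1])
assert np.allclose(h1 @ L[2], 0) and np.allclose(h1 @ L[0], 0)

# L_1 wedge L_2 is a simultaneous ad-eigenvector of h1, h2 (a root vector)
r = wedge(L[1], L[2])
for h in (h1, h2):
    br = h @ r - r @ h
    nz = np.abs(r) > 1e-9
    ratio = br[nz][0] / r[nz][0]
    assert np.allclose(br, ratio * r)
```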

You can follow this exact reasoning for $\mathfrak{sp}_{2n}$. Again we must choose null lines, this time for the symplectic form $\omega$, organised into pairs $L_1, L_{-1}, L_2, L_{-2},\dots, L_n, L_{-n}$. Of course, for a symplectic form all lines are null, but we still want the orthogonality relations we had last time, so $L_i \perp L_j$ unless $j = -i$. Note this is, in effect, picking a Darboux basis for the space, but again only up to scale. Again we can relate $\mathfrak{sp}_{2n}$ to tensors, but instead of the antisymmetric tensors this time we use the symmetric ones: $S^2\mathbb{C}^{2n} \cong \mathfrak{sp}_{2n}$ via $$ x \odot y \mapsto \omega(x,\cdot) y + \omega(y,\cdot)x. $$

We follow the same plan as before (the bracket rules for $S^2\mathbb{C}^{2n}$ are essentially the same) to find the Cartan subalgebra as $$ \mathfrak{h} = \bigoplus_i L_i \odot L_{-i} $$

and the root spaces as $L_i \odot L_j$ where $j \neq -i$. The long roots are those whose root spaces have the form $L_i \odot L_i$.
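The same can be checked numerically for $\mathfrak{sp}_4$; again, the Darboux-type basis below is my own choice, not part of the answer:

```python
import numpy as np

# symplectic Gram matrix for u_1, u_{-1}, u_2, u_{-2}: omega(u_i, u_{-i}) = 1
W = np.zeros((4, 4))
W[0, 1], W[1, 0] = 1.0, -1.0
W[2, 3], W[3, 2] = 1.0, -1.0
e = np.eye(4)
U = {1: e[0], -1: e[1], 2: e[2], -2: e[3]}

def sym(x, y):
    # x sym y acting as omega(x,.)y + omega(y,.)x ;
    # note omega(x, v) = x^T W v = (W^T x) . v
    return np.outer(y, W.T @ x) + np.outer(x, W.T @ y)

h1, h2 = sym(U[1], U[-1]), sym(U[2], U[-2])

# membership in sp(omega): M^T W + W M = 0
for M in (h1, h2):
    assert np.allclose(M.T @ W + W @ M, 0)

# a long root vector u_1 sym u_1 and a short one u_1 sym u_2 are
# simultaneous ad-eigenvectors of h1, h2
for r in (sym(U[1], U[1]), sym(U[1], U[2])):
    for h in (h1, h2):
        br = h @ r - r @ h
        nz = np.abs(r) > 1e-9
        ratio = br[nz][0] / r[nz][0]
        assert np.allclose(br, ratio * r)
```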

You can even apply this to the exceptional Lie algebras as well by recognising that they contain maximal rank classical ones. For example, $\mathfrak{e}_7$ contains $\mathfrak{sl}_8$ and can in fact be written as $\mathfrak{sl}_8 \oplus \bigwedge^4\mathbb{C}^8$ as a $\mathfrak{sl}_8$-module and the root spaces are the weight spaces for this action. But this all gets a bit complex to show so I will leave it there.

Callum

Here is a summary of Humphreys's treatment, with $G_2$ as an example. It is not a classical Lie algebra, but it is more interesting than the usual example $\mathfrak{sl}(2)$ and still manageable for the internet: https://www.physicsforums.com/insights/lie-algebras-a-walkthrough-the-structures/

However, it would be necessary to study Humphreys's book to appreciate the beauty of the geometry behind the classification.

Marius S.L.

The algebras $\mathfrak{so}(2n+1), \mathfrak{sp}(2n), \mathfrak{so}(2n)$ are defined, using a matrix $s$ describing a bilinear form, as the set $\{x:sx=-x^\top s\}$. The key is to use a convenient $s$. For example, if you use $s=I_{2n}$ or $s=I_{2n+1}$ for $\mathfrak{so}$, you get no diagonal matrices, which makes the calculation difficult.

The following are the $s$ used in Humphreys' Introduction to Lie Algebras and Representation Theory, pages 2-3.

  • For $\mathfrak{so}(n+1,n)$ $$ s_B=\begin{pmatrix} 1&0&0\\ 0&0&I_n\\ 0&I_n&0 \end{pmatrix} $$
  • For $\mathfrak{sp}(2n)$ $$ s_C=\begin{pmatrix} 0&I_n\\ -I_n&0 \end{pmatrix} $$
  • For $\mathfrak{so}(n,n)$ $$ s_D=\begin{pmatrix} 0&I_n\\ I_n&0 \end{pmatrix} $$

Using these matrices to find all the $x$ satisfying $sx=-x^\top s$, and extracting a basis, you get diagonal matrices that span a Cartan subalgebra, while the other basis elements are root vectors.
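As a quick sanity check (my own, with numpy), one can generate the basis of the $D$-type algebra listed below for a small $n$ and verify that every element satisfies $sx=-x^\top s$ and that the count is $2n^2-n$:

```python
import numpy as np

n = 2
s = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])   # Humphreys' s_D

def e(i, j, m=2*n):          # 1-indexed elementary matrix e_{i,j}
    M = np.zeros((m, m)); M[i-1, j-1] = 1.0; return M

basis = []
basis += [e(i, i) - e(i+n, i+n) for i in range(1, n+1)]             # Cartan
basis += [e(i, j) - e(j+n, i+n) for i in range(1, n+1)
                                for j in range(1, n+1) if i != j]   # A-block
basis += [e(i, j+n) - e(j, i+n) for i in range(1, n+1)
                                for j in range(i+1, n+1)]           # B-block
basis += [e(i+n, j) - e(j+n, i) for i in range(1, n+1)
                                for j in range(i+1, n+1)]           # C-block

assert len(basis) == 2*n*n - n               # dim so(2n) = 2n^2 - n
for x in basis:
    assert np.allclose(s @ x, -x.T @ s)      # each element lies in so(n,n)
```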

Edit: Over $\Bbb{R}$, $s_B$ has signature $(n+1,n)$ and $s_D$ signature $(n,n)$ as symmetric bilinear forms. However, $\Bbb{C}\otimes_{\Bbb{R}}\mathfrak{so}(p,q;\Bbb{R})\cong\mathfrak{so}(p,q;\Bbb{C})\cong\mathfrak{so}(p+q;\Bbb{C})$. Unless the Lie algebra has a Cartan sub-algebra with a basis of matrices having real eigenvalues, the root-space decomposition must be done over $\Bbb{C}$, since real anti-symmetric matrices have imaginary eigenvalues.

Example for $\mathfrak{so}(n,n)$: a matrix $x$ such that $s_D x=-x^\top s_D$ has the form $$ x= \begin{pmatrix} A&B\\C&-A^\top \end{pmatrix} $$ with $B=-B^\top, C=-C^\top$. So take the following basis for $\mathfrak{so}(n,n)$

  • Using $A$ only with $B=C=0$, diagonal matrices $e_{i,i}-e_{i+n,i+n}$ for $1\le i\le n$. These span a Cartan sub-algebra.
  • Using $A$ only with $B=C=0$, $e_{i,j}-e_{j+n,i+n}$ for $1\le i,j\le n$ and $i\ne j$. These are root vectors.
  • Using $B$ only with $A=C=0$, $e_{i,j+n}-e_{j,i+n}$ for $1\le i<j\le n$. These are root vectors.
  • Using $C$ only with $A=B=0$, $e_{i+n,j}-e_{j+n,i}$ for $1\le i<j\le n$. These are root vectors.

The matrices $e_{i,i}-e_{i+n,i+n}$ are all diagonal, mutually commuting and linearly independent. Since they're semisimple, they're ad-semisimple and span a toral sub-algebra of dimension $n$.

To verify the other vectors are root vectors, we note first that half of them are $\pm$ transposes of the other half, and using the identity $[M,D]^\top = [D^\top, M^\top]$ we only need to check half. We take $e_{i,j}-e_{j+n,i+n}, e_{i,j+n}-e_{j,i+n}$ for $i<j$ for that. It can be verified that $$ \begin{align} [e_{i,i}-e_{i+n,i+n}, e_{i,j}-e_{j+n,i+n}]&=e_{i,j}-e_{j+n,i+n}\\ [e_{i,i}-e_{i+n,i+n}, e_{i,j+n}-e_{j,i+n}]&=e_{i,j+n}-e_{j,i+n}\\ [e_{j,j}-e_{j+n,j+n}, e_{i,j}-e_{j+n,i+n}]&=-(e_{i,j}-e_{j+n,i+n})\\ [e_{j,j}-e_{j+n,j+n}, e_{i,j+n}-e_{j,i+n}]&=e_{i,j+n}-e_{j,i+n} \end{align} $$ with all other brackets zero. So all non-diagonal basis vectors are ad-eigenvectors of the diagonal ones, and since we've exhausted the dimension of the entire Lie algebra ($2n^2-n$), the diagonal matrices are a Cartan sub-algebra and this is a root-space decomposition.
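The four bracket identities can also be confirmed numerically; here is a check of mine for $n=3$, $i=1$, $j=2$:

```python
import numpy as np

n = 3
def e(i, j, m=2*n):          # 1-indexed elementary matrix e_{i,j}
    M = np.zeros((m, m)); M[i-1, j-1] = 1.0; return M

def br(a, b):
    return a @ b - b @ a

i, j = 1, 2
hi = e(i, i) - e(i+n, i+n)
hj = e(j, j) - e(j+n, j+n)
x1 = e(i, j) - e(j+n, i+n)       # A-block root vector
x2 = e(i, j+n) - e(j, i+n)       # B-block root vector

assert np.allclose(br(hi, x1),  x1)
assert np.allclose(br(hi, x2),  x2)
assert np.allclose(br(hj, x1), -x1)
assert np.allclose(br(hj, x2),  x2)

# any other diagonal basis element commutes with both root vectors
hk = e(3, 3) - e(3+n, 3+n)
assert np.allclose(br(hk, x1), 0)
assert np.allclose(br(hk, x2), 0)
```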

Chad K