
I'm trying to reconstruct a root diagram of a Lie algebra akin to the attached image. I've constructed all the root vectors, but I'm struggling to see how one practically views the roots as living in $\mathbb{R}^2$.

As I understand it, I should have $E_r$, the root vector corresponding to the root $r$, with $[h,E_r] = r(h)E_r$; but for my Lie algebra I only get scalars for $r(h)$, so it's rather unclear how this embeds in $\mathbb{R}^2$.

Additionally, I think I also know how these $r$ look as elements of $\mathfrak{h}$, the Cartan subalgebra, but I'm struggling to compute their Killing form $B(r,r)$. As I understand it, I need to look at how the $r \in \mathfrak{h}$ act adjointly on elements of $\mathfrak{g}$.

That is, compute $[r,g]$ for a general element $g \in \mathfrak{g}$ ($\textbf{NOT}$ just $\mathfrak{h}$!) to get a matrix for the adjoint action of $r$, then square that matrix and take its trace. Is this correct?
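For concreteness, here is the computation I have in mind as a small sympy sketch (a sketch only: the basis of $\mathfrak{sl}_3$ and all helper names are my own ad hoc choices, so this may be exactly where I go wrong):

```python
import sympy as sp

def E(i, j):
    """Elementary 3x3 matrix with a 1 in position (i, j)."""
    M = sp.zeros(3, 3)
    M[i, j] = 1
    return M

# An ad hoc basis of sl_3: six off-diagonal matrices, two traceless diagonal ones.
basis = [E(0, 1), E(0, 2), E(1, 2), E(1, 0), E(2, 0), E(2, 1),
         sp.diag(1, -1, 0), sp.diag(0, 1, -1)]
P = sp.Matrix.hstack(*[m.reshape(9, 1) for m in basis])  # 9x8, full column rank

def ad(x):
    """8x8 matrix of ad(x) = [x, -] with respect to the basis above."""
    return sp.Matrix.hstack(*[P.pinv() * (x*m - m*x).reshape(9, 1) for m in basis])

def killing(x, y):
    """Killing form B(x, y) = tr(ad(x) ad(y))."""
    return (ad(x) * ad(y)).trace()

h = sp.diag(1, -1, 0)   # an element of the Cartan subalgebra
print(killing(h, h))    # 12, consistent with B(x, y) = 2n tr(xy) = 6 tr(h^2) on sl_3
```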

Ultimately, I'd like to end up with vectors as in the diagram. Could someone please explain the actual concrete process of this construction?

  • The geometry of the roots as elements of a root system is very different from the geometry of the Lie algebra from which you "distill" the roots. For starters, your Lie algebra may be defined over any field, often $\mathbb C$, but that visualisation of the root system indeed is commonly seen in $\mathbb R^2$. In particular, it is not what you call the $E_r$ which should be identified with the roots -- they are basis vectors of the root spaces in the Lie algebra. Confer my answer to https://math.stackexchange.com/q/3312731/96384 for more discussion (with the same root system!). – Torsten Schoeneberg Feb 18 '20 at 15:48
  • In a nutshell, the root system is a finite subset of $\mathfrak{h}^*$, the dual space of a Cartan subalgebra of the Lie algebra, and this finite set has some nice combinatorial properties, which in the best cases encode all the information one ever needs about the Lie algebra. To visualise this finite combinatorial object (the root system), it is commonly viewed as sitting inside some Euclidean space $\mathbb R^n$. This Euclidean space, however, has very little to do with any vector space structure on the Lie algebra with which we started originally. – Torsten Schoeneberg Feb 18 '20 at 16:04
  • @Torsten Schoeneberg Oh, so it's not really a canonical realisation of the roots; is it just that you know how they interact on the Lie algebra level, and you construct a system of vectors that fits those relations? The $E_r$ are the "root vectors", aren't they? By which I mean they aren't roots as vectors, but eigenvectors of the adjoint action, and their "eigenvalue" in the dual space is what we're realising, right? I'm confused at how exactly those vectors were obtained in the attached question beyond trial and error... – Eugaurie Feb 18 '20 at 20:01
  • Yes, the $E_r$ are often called root vectors and indeed they are sort of the eigenvectors to the roots, which are somewhat generalised eigenvalues. Note however that so far each $E_r$ is only determined up to scalar multiple; it is a deeper theorem due to Chevalley that there are specific choices for them which are nice (cf. "Chevalley basis / structure constants"). If we fix one for each root $r$ though, they can actually play the role of coroots from the abstract theory of root systems. -- What exact question do you mean where you are unsure how to obtain those root vectors? – Torsten Schoeneberg Feb 19 '20 at 05:36
  • @Torsten Schoeneberg I suppose I'm unsure how you actually reached the realisation of $A_2$'s root system and how you calculated the Killing form. Do I have to use the Killing form of $\mathfrak{g}$ itself, and how do I get the element of $\mathfrak{h}$ corresponding to my root in the dual space? (I understand that the Killing form gives an isomorphism, but I'm not sure how I practically use that to get it.) – Eugaurie Feb 19 '20 at 08:26
  • I think I understand what you're asking, hopefully I'll have some time soon to formulate some kind of answer to that. By the way, I made a mistake in an earlier comment: It's actually not the $E_r$ which would play the role of coroots, but certain elements $H_r$ in the chosen Cartan subalgebra; these are of the form $[E_r, E_{-r}]$ but now one really needs a specific scaling for the $E_r$. (Which shows that your refined question is indeed a good one, these things are a little more intricate than they are often presented.) – Torsten Schoeneberg Feb 20 '20 at 17:36

1 Answer


After discussion in the comments, I understand the question as follows: Say we have started with the Lie algebra $\mathfrak{sl}_3(K) = \lbrace A \in M_{3\times 3}(K): \mathrm{tr}(A)=0 \rbrace$ and have chosen as the most obvious Cartan subalgebra the one consisting of the diagonal matrices in $\mathfrak{sl}_3(K)$, i.e.

$\mathfrak{h} = \lbrace \pmatrix{a&0&0\\0&b&0\\0&0&c}: a+b+c=0 \rbrace $, as well as the roots $\pm\alpha, \pm \beta, \pm \gamma \in \mathfrak{h}^*$ given by

$\alpha(\pmatrix{a&0&0\\0&b&0\\0&0&c})= a-b$,

$\beta(\pmatrix{a&0&0\\0&b&0\\0&0&c})= b-c$,

$\gamma(\pmatrix{a&0&0\\0&b&0\\0&0&c})= a-c$.

We might also have found the corresponding root spaces $\mathfrak{g}_\alpha = \pmatrix{0&*&0\\0&0&0\\0&0&0}$ to $\alpha$, $\mathfrak{g}_{-\alpha} = \pmatrix{0&0&0\\*&0&0\\0&0&0}$ to $-\alpha$, $\mathfrak{g}_{\alpha+\beta} =\pmatrix{0&0&*\\0&0&0\\0&0&0}$ to $\alpha+\beta$ etc. (We can call $\pmatrix{0&1&0\\0&0&0\\0&0&0}$ a "root vector" to the root $\alpha$, but so is $\pmatrix{0&17&0\\0&0&0\\0&0&0}$.)
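(As a quick machine check of the first of these statements, namely $[h, E_{12}] = (a-b)E_{12}$, here is a minimal sympy snippet; the names are throwaway, nothing canonical:)

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
h = sp.diag(a, b, c)                                 # generic diagonal element of h
E12 = sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])   # a root vector for alpha

# [h, E12] - alpha(h)*E12 should be identically zero:
print(sp.simplify(h*E12 - E12*h - (a - b)*E12))      # prints the zero matrix
```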

Now the question is: How do we get, from this, that the root system looks like the picture in the OP?

A shortcut is to notice that $\gamma=\alpha+\beta$ and to use the classification of root systems, which tells us that the only root system consisting of six roots, three of which are positive and one of which is the sum of the other two, is the root system $A_2$; and ten thousand people before us have checked that this root system looks like the picture in the OP.

The more rewarding way is as follows: A full description of a root system and its "geometry" needs the coroots $\check{\alpha}, \check{\beta} ...$ to the roots. Namely, the crucial relations are

$s_{\alpha}(\beta) = \beta-\check{\alpha}(\beta)\, \alpha$ (reflection of $\beta$ in the hyperplane perpendicular to $\alpha$), and more or less equivalently

$(\ast) \qquad \check{\alpha}(\beta) \cdot \check{\beta}(\alpha) = 4 \cos^2(\theta)$ where $\theta$ is the angle between $\alpha$ and $\beta$.
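(To see the first formula in action, anticipating the values computed below: in our example $\check{\alpha}(\beta) = -1$, so $s_\alpha(\beta) = \beta - (-1)\alpha = \alpha+\beta$; reflecting $\beta$ in the hyperplane perpendicular to $\alpha$ lands exactly on the third positive root.)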

With these formulae we get the "realisation" of our root system in a Euclidean space, as soon as we have those coroots. One obvious property we need is $\check{\rho}(\rho) = 2$ for all roots; but where are those coroots in the Lie algebra? They are realised as special elements of $\mathfrak{h}^{**} \simeq \mathfrak{h}$, as follows: For each root $\rho$, the space $[\mathfrak{g}_\rho, \mathfrak{g}_{-\rho}]$ is a one-dimensional subspace of $\mathfrak{h}$, and it contains a unique element $H_{\rho} \in [\mathfrak{g}_\rho, \mathfrak{g}_{-\rho}]$ such that $\rho(H_\rho) =2$.

This element $H_\rho \in \mathfrak{h}$ is the coroot $\check{\rho}$ via the identification $\mathfrak{h}^{**} \simeq \mathfrak{h}$. (Down to earth: For each root $\sigma$, $\check{\rho}(\sigma) = \sigma(H_{\rho})$.)

In our example, we get

$[\mathfrak{g}_\alpha, \mathfrak{g}_{-\alpha}] = \lbrace \pmatrix{a&0&0\\0&-a&0\\0&0&0} : a \in K \rbrace$ and hence $H_\alpha = \pmatrix{1&0&0\\0&-1&0\\0&0&0}$, and likewise

$H_\beta= \pmatrix{0&0&0\\0&1&0\\0&0&-1}$, $H_\gamma= \pmatrix{1&0&0\\0&0&0\\0&0&-1}$, $H_{-\alpha} = \pmatrix{-1&0&0\\0&1&0\\0&0&0}$ etc.

In particular, $\check{\alpha}(\beta) = \beta(H_\alpha) = -1$ as well as $\check{\beta}(\alpha) = \alpha(H_\beta)=-1$, which together with $(\ast)$ tells us that the angle between $\alpha$ and $\beta$ has cosine $1/2$ or $-1/2$, i.e. is either $60°$ or $120°$. Checking the other combinations quickly shows that it must actually be $120°$, that $\alpha+\beta$ sits exactly "between" $\alpha$ and $\beta$ with an angle of $60°$ to each, and consequently that the root system looks as in the picture.
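(If one wants to double-check these numbers mechanically, a small sympy sketch like the following does the job; the helper names are throwaway, and the scaling of the root vectors used here happens to give the right normalisation:)

```python
import sympy as sp

def E(i, j):
    M = sp.zeros(3, 3)
    M[i, j] = 1
    return M

# Coroots as commutators of root vectors (this scaling already gives rho(H_rho) = 2):
H_alpha = E(0, 1)*E(1, 0) - E(1, 0)*E(0, 1)   # = diag(1, -1, 0)
H_beta  = E(1, 2)*E(2, 1) - E(2, 1)*E(1, 2)   # = diag(0, 1, -1)

alpha = lambda h: h[0, 0] - h[1, 1]
beta  = lambda h: h[1, 1] - h[2, 2]

print(alpha(H_alpha), beta(H_beta))   # 2 2    (rho(H_rho) = 2)
print(beta(H_alpha), alpha(H_beta))   # -1 -1  (the Cartan integers)

# (*) gives 4 cos^2(theta) = (-1)*(-1) = 1; the consistent, obtuse choice is
print(sp.acos(sp.Rational(-1, 2)))    # 2*pi/3, i.e. 120 degrees
```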


Added in response to comment: Be aware that $\rho(H_\rho) = 2$ is true by definition for all $\rho$, regardless of the length of $\rho$. Rather, the ratio of two root lengths is given by $$\dfrac{\lvert \lvert \beta \rvert \rvert^2}{\lvert \lvert \alpha\rvert \rvert^2} = \dfrac{\beta(H_\alpha)}{\alpha(H_\beta)}$$ (and, as is well known, can only take the values $3,2,1,\frac12, \frac13$).
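(In the $\mathfrak{sl}_3$ example above this ratio is $\frac{\beta(H_\alpha)}{\alpha(H_\beta)} = \frac{-1}{-1} = 1$, so all six roots of $A_2$ have the same length, as the picture shows.)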

[In many sources they write something like $\langle \beta, \check{\alpha} \rangle$ for $\beta(H_\alpha)= \check{\alpha}(\beta)$ but it is important to note that this $\langle, \rangle$ is not a scalar product (it's generally not even symmetric to begin with) and thus will not directly give us lengths. "True" scalar products are then usually denoted by something like $(\alpha \vert \beta)$, and that would give the above $\lvert \lvert \alpha \rvert \rvert^2 = (\alpha \vert \alpha)$.]

Note further that it is in general not true that $H_{\alpha+\beta} \stackrel{?}= H_\alpha + H_\beta$ (in fancier words and more precisely: the map $\rho \mapsto H_\rho$ is in general not a morphism, let alone isomorphism, of root systems $\Phi \rightarrow \check{\Phi}$). To see a concrete example, look at a form of type $B_2$, i.e. (compare https://math.stackexchange.com/a/3629615/96384, where the matrices are mirrored at the diagonal, and the notation for $\alpha$ and $\beta$ is switched)

$\mathfrak{so}_5(\mathbb C) := \lbrace \pmatrix{a&b&0&e&g\\ c&d&-e&0&h\\ 0&f&-a&-c&i\\ -f&0&-b&-d&j\\ -i&-j&-g&-h&0\\} : a, ..., j \in \mathbb C \rbrace$.

Exercise: There is a long root $\beta$ and a short root $\alpha$ such that a system of positive roots is given by $\beta, \beta+\alpha, \beta+2\alpha, \alpha$ (with $\alpha$ and $\beta$ simple), and we have $$H_\beta = \pmatrix{1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&-1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&0\\}, H_\alpha= \pmatrix{0&0&0&0&0\\ 0&2&0&0&0\\ 0&0&0&0&0\\ 0&0&0&-2&0\\ 0&0&0&0&0\\},$$

but $$H_{\alpha+\beta} = \pmatrix{2&0&0&0&0\\ 0&0&0&0&0\\ 0&0&-2&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\} \neq H_\alpha +H_\beta.$$

Note that, as said, $\rho(H_\rho)=2$ for all roots, so we cannot tell long from short roots with that; but $\beta(H_\alpha) = -2, \alpha(H_\beta)=-1$, which makes $\beta$ longer than $\alpha$ by a factor of $\sqrt2$, and with the method from above we get that the angle between $\alpha$ and $\beta$ is $3\pi/4$, i.e. $135°$, and the root system looks like this: https://commons.wikimedia.org/wiki/File:Root_system_B2.svg
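(Once more, a throwaway sympy sketch to confirm these last claims numerically; none of the names are standard:)

```python
import sympy as sp

# Coroots read off from the 5x5 matrices above:
H_beta  = sp.diag(1, -1, -1, 1, 0)
H_alpha = sp.diag(0, 2, 0, -2, 0)
H_ab    = sp.diag(2, 0, -2, 0, 0)            # H_{alpha+beta}

print(H_ab == H_alpha + H_beta)              # False: rho -> H_rho is not additive

# The Cartan integers beta(H_alpha) = -2 and alpha(H_beta) = -1 give:
b_Ha, a_Hb = -2, -1
print(sp.Rational(b_Ha, a_Hb))                         # ||beta||^2/||alpha||^2 = 2
print(sp.acos(-sp.sqrt(sp.Rational(b_Ha * a_Hb, 4))))  # 3*pi/4, i.e. 135 degrees
```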

  • I see that here we chose a unique coroot to associate to each root from the 1-dimensional space of possible coroots by requiring that $\rho(H_{\rho}) = 2$. Is there something similar I can do when the roots are of different lengths, since this seems to effectively force everything to be the same length? Is one way to avoid this to do it for, say, $\alpha$ and $\beta$, but then, since $\gamma = \alpha + \beta$, just take $H_{\alpha + \beta} = H_{\alpha} + H_\beta$? – Eugaurie Mar 28 '20 at 17:26
  • @Rzmwood: See edit. – Torsten Schoeneberg Mar 29 '20 at 18:07