
The following is a statement about commutative Lie algebras from the paper "Lie Algebras, Algebraic Groups and Lie Groups" by J.S. Milne:

"A Lie algebra $V$ is said to be commutative (or abelian) if $[x,y]=0 \ \forall \ x,y\in V$. Thus, to give a commutative Lie algebra amounts to giving a finite-dimensional vector space."

I want to know why it is that in a commutative Lie algebra $[x,y]=0 \ \forall \ x,y\in V$.

Since the Lie algebra is commutative, I know that $[x,y]=[y,x] \ \forall \ x,y\in V$, and by the anticommutativity property of a Lie algebra, $[x,y]=-[y,x]$. So $[x,y]=[y,x] \implies [x,y]-[y,x]=0 \implies [x,y]+[x,y]=0$.

Why does that imply $[x,y]=0$? My question, basically, is: in a vector space $V$, is it not possible for some element $a \neq 0$ to have finite order, i.e. $\underbrace{a+a+\dots+a}_{n \text{ times}}=0$? If not, why not?

My second question is: why does being commutative directly imply that the Lie algebra is a finite-dimensional vector space?

2 Answers


Your first question is: why is $[x,y] = 0$ in a commutative Lie algebra? The reason the bracket of a commutative Lie algebra $\mathfrak{g}$ satisfies $[x,y] = 0$ for all $x,y$ is that the bracket should be thought of as an algebraic formulation of the commutator of two matrices or linear operators. Two matrices $X,Y$ commute when $$XY - YX = [X,Y] = 0.$$ A matrix algebra $A$ is then commutative if $[X,Y] = 0$ for all $X,Y \in A$. Thus, for Lie algebras, commutativity is defined by asserting that $[x,y] = 0$ for all Lie algebra elements $x,y\in\mathfrak{g}$, where $[\cdot,\cdot]$ here is the abstract Lie bracket.
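As a quick sanity check of the matrix picture, here is a short numpy sketch (the helper name `commutator` is my own, not a standard API):

```python
import numpy as np

def commutator(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Diagonal matrices commute, so their commutator vanishes.
D1 = np.diag([1.0, 2.0])
D2 = np.diag([3.0, 4.0])
assert np.allclose(commutator(D1, D2), 0)

# Generic matrices usually do not commute: for these two
# nilpotent matrices e, f we get [e, f] = diag(1, -1) != 0.
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
assert np.allclose(commutator(e, f), np.diag([1.0, -1.0]))
```

So the full matrix algebra is not commutative as a Lie algebra, while the subalgebra of diagonal matrices is.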

For your second question about the order of vectors: suppose I have a normed vector space $V$ over a field of characteristic $0$ (which includes $\mathbb{R}$ and $\mathbb{C}$), and $v \in V$ such that $v \neq 0$. Then $||v|| > 0$, so, for all integers $n \geq 1$, $||nv|| = n||v|| > 0$. Thus, $nv$ can never be $0$ when $n \geq 1$.

For your third question about commutativity and vector spaces: I believe the author made a small error here. I think what they are saying is: if you define a commutative Lie algebra, then the bracket is trivial, since it sends every pair of vectors to $0$. This means there is no interesting Lie algebra structure to study, so it is as if you had just defined a vector space with no Lie algebraic structure. Another way to put this is: on any vector space you can define a trivial commutative Lie algebra structure, in which the bracket sends every pair of vectors to $0$. I believe their error is in saying that the vector space must be finite-dimensional, which I don't believe is necessarily true, as one can define the trivial Lie bracket on any vector space.

Lastly, you considered trying to prove that $[x,y] = [y,x] \implies [x,y] = 0$, from which you wrote down $[x,y] + [x,y] = 0$. To complete the proof, you can simply write: $$[x,y] + [x,y] = 2[x,y] = 0 \implies [x,y] = 0.$$ As mentioned in the comments, this only works in fields of characteristic $\not= 2$, which includes $\mathbb{R}$ and $\mathbb{C}$.
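To see why the characteristic matters in that last cancellation step, here is a minimal sketch in plain modular arithmetic, representing $\mathbb{F}_p$ by integers mod $p$ (the setup is my own illustration, not from the answer above):

```python
# Over F_2, the scalar 2 equals 0, so 2*v = 0 for EVERY v:
# "2*v = 0 implies v = 0" fails, with v = 1 as a counterexample.
p = 2
assert all((2 * v) % p == 0 for v in range(p))

# Over F_3 (characteristic != 2), 2 is invertible (2 * 2 = 4 = 1 mod 3),
# so 2*v = 0 does force v = 0.
p, inv2 = 3, 2
for v in range(p):
    if (2 * v) % p == 0:
        assert (inv2 * 2 * v) % p == v  # v = 2^{-1} * (2v)
        assert v == 0                   # only the zero element qualifies
```

The same cancellation by $2^{-1}$ is exactly what the displayed proof uses, which is why it needs $\mathrm{char} \neq 2$.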

Solarflare0
    I think your second bit assumes the vector spaces can be normed. Is that true over fields of positive characteristic? – Randall Oct 15 '21 at 23:35
  • Right I forgot to consider that - I've edited my response. Actually, it's been a while since I've studied any abstract algebra, but I'm not convinced the statement is true for vector spaces $V$ over fields of characteristic $p < \infty$. In this case I think it would be true that for any $v \in V$, $p \cdot v = 0$? – Solarflare0 Oct 15 '21 at 23:47
  • Yes, but then $p=0$, so it's okay. – Randall Oct 15 '21 at 23:51
  • I also have an issue with the norm argument. You assume that $||nv||=n||v||$, but $n$ here is not an element of the field; it's just the number of times you add $v$. I mean, it won't be right to say $||v+v||=||v||+||v||$. Can you please provide a proof that does not use a norm? – Siddharth Prakash Oct 15 '21 at 23:53
  • It makes little difference by the distribution axiom in a vector space. – Randall Oct 16 '21 at 00:18

0. On the "finite dimension": That is just a glitch. When you scroll down in the source (and many others), you will see that most of the time we only look at Lie algebras which are finite-dimensional over their ground field. Milne notes that a few lines later. Maybe he had originally put the assumption of finite dimension into the definition of a Lie algebra (some sources do that for convenience), and then any abelian Lie algebra would be finite-dimensional, not because it is abelian, but because it is a Lie algebra. But as it stands, the definition of a Lie algebra does not include finite dimension, and being abelian or commutative does not imply anything about the dimension, so one should just delete "finite-dimensional" in that line.


1. Let's spell out some very basic linear algebra which Randall points out in the comments but which seems to cause problems for the OP:

Lemma. Let $V$ be any vector space over a field $K$. If $\mathrm{char}(K) \nmid n$, then $\underbrace{v + v + \dots+ v}_{n \text{ times}} = 0$ implies $v=0$.

Proof as one would normally write it: $\mathrm{char}(K) \nmid n \implies \frac{1}{n} \in K$ and hence $nv=0$ implies $v = \frac{1}{n} \cdot nv =0$.

Ultra-formal proof: In the following, $+_K$ and $\cdot_K$ mean addition and multiplication in the field $K$, while $+$ means addition in the vector space $V$ and $\cdot$ means multiplication of a scalar from $K$ with a vector from $V$. Further, $1$ means the multiplicative identity in $K$, $0_K$ means the additive identity in $K$, while $0$ means the additive identity ("zero vector") in $V$.

Now, $\mathrm{char}(K) \nmid n$ implies that $n_K := \underbrace{(1 +_K 1 +_K \dots +_K 1)}_{n \text{ times}} \in K \setminus \{0_K\}$, so it has a multiplicative inverse $n^{-1}_K \in K$, and hence $$v = 1 \cdot v = (n^{-1}_K \cdot_K n_K) \cdot v = n^{-1}_K \cdot (n_K \cdot v) \\= n^{-1}_K \cdot \left(\underbrace{(1 +_K 1 +_K \dots +_K 1)}_{n \text{ times}} \cdot v\right) \\= n^{-1}_K \cdot \underbrace{(1\cdot v + 1\cdot v + \dots +1\cdot v)}_{n \text{ times}} \\= n^{-1}_K \cdot \underbrace{(v + v + \dots+ v)}_{n \text{ times}} \\ \stackrel{(*)}= n^{-1}_K \cdot 0 = 0$$ where $(*)$ was our assumption, and each other equality is some definition or axiom of what a vector space over a field is (the fifth equal sign in particular is what Randall calls "the distribution axiom").


2. A note on characteristic $2$ and on terminology:

Besides bilinearity and the Jacobi identity, the Lie bracket in a Lie algebra $L$ satisfies one further axiom. The proper way to state this axiom is

$$\forall x \in L: [x,x] = 0 \qquad (A1)$$

Milne states it like this too. Some sources instead give the axiom

$$\forall x,y \in L: [x,y] = -[y,x] \qquad (A2)$$

As Milne notes (and you should verify for yourself), it is an easy linear algebra exercise, using bilinearity of $[\cdot,\cdot]$, to show that $A1 \implies A2$ regardless of the characteristic of $K$. You can also find an easy proof that $A2 \implies A1$, but only if at some point you assume $\mathrm{char}(K) \neq 2$. This is the reason why $(A1)$ is preferred. Sources where the ground field is never anything but $\mathbb R$ or $\mathbb C$ understandably don't care much, but good sources (like, in this case, Wikipedia) point out the subtle difference.

Note that accordingly, there is a slight deviation of terminology in characteristic $2$. The consensus right now seems to be that a Lie algebra in which

$$\forall x,y \in L: [x,y] = 0 $$

is called "abelian", while a Lie algebra in which

$$\forall x,y \in L: [x,y]=[y,x]$$

is called "commutative". With this terminology, it is obvious that abelian implies commutative, and the whole point of your first question and my ultra-formal paragraph 1 was to convince you that in case $\mathrm{char}(K) \neq 2$, commutative implies abelian, i.e. they are the same thing. In characteristic $2$, however, with this terminology, either of $A1$ or $A2$ implies that every Lie algebra is commutative (but not necessarily abelian). Cf. Dietrich Burde's answer here.
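In characteristic $2$ one can write down a concrete commutative-but-not-abelian example and check the axioms by brute force. The following construction is my own illustration, not from the answer: on $V = \mathbb{F}_2^2$ with basis $e_1, e_2$, set $[u,v] := (u_1 v_1)\, e_2$. This bracket is bilinear and symmetric, so $(A2)$ holds because $-1 = 1$ in $\mathbb{F}_2$, yet $[e_1,e_1] = e_2 \neq 0$, so $(A1)$ fails:

```python
import itertools

P = 2  # ground field F_2

def bracket(u, v):
    """Bracket on F_2^2: [u, v] = (u_1 * v_1) * e_2 (bilinear, symmetric)."""
    return (0, (u[0] * v[0]) % P)

def add(u, v):
    """Vector addition in F_2^2."""
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

V = list(itertools.product(range(P), repeat=2))  # all 4 vectors of F_2^2

# Commutative: [u, v] = [v, u] = -[v, u], since -1 = 1 in F_2.
assert all(bracket(u, v) == bracket(v, u) for u in V for v in V)

# Not abelian: [e_1, e_1] = e_2 != 0.
assert bracket((1, 0), (1, 0)) == (0, 1)

# Jacobi holds trivially: every bracket lands in span(e_2),
# and bracketing anything against span(e_2) gives 0.
assert all(add(add(bracket(x, bracket(y, z)),
                   bracket(y, bracket(z, x))),
               bracket(z, bracket(x, y))) == (0, 0)
           for x in V for y in V for z in V)
```

So in characteristic $2$ the two notions genuinely come apart, exactly as described above.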


3. Why do we say "$x$ and $y$ commute" if $[x,y]=0$:

This is explained well in the other answer. Confer also e.g. Why do we use the commutator bracket for Lie algebra's. In standard (and historically first) examples, the Lie bracket was defined as a new operator on some associative algebra (matrices or vector fields) via $$[X,Y] := XY-YX.$$ The thing on the right hand side obviously measures "how much $X$ and $Y$ fail to commute" in the associative algebra setting, which is why it is called "the commutator". To say "the commutator is zero" is then just a different way of saying that $X$ and $Y$ commute in the original associative algebra. --- Now, not all Lie algebras "really come from" associative algebras like that (although one can try to "envelop" them with such), but the terminology is still handy and widely used.

Note that according to what's written above in 2, in all characteristics except $2$, we have that $[x,y]= 0$ is equivalent to $[x,y] =[y,x]$. In characteristic $2$, however, there is a difference: the Lie bracket can (in fact, always does) "commute" even when, viewing the bracket as the commutator of an associative algebra, the underlying elements do not commute in that associative algebra. (In a way, commutativity of the commutator becomes a different concept from commutativity of the "original associative" elements, whereas in every other characteristic the two are just the same.)


4. "To give an abelian Lie algebra is to give a vector space". (With all said above, it should be clear now that this is how the sentence should be written.)

You are not asking about this, but I just want to prepare you that at some point somebody might phrase that as "The forgetful functor from the category of $K$-Lie algebras to the category of $K$-vector spaces, when restricted to the subcategory of abelian Lie algebras, gives an equivalence of categories". What this very fancy terminology means is that if you have any $K$-vector space $V$, there is one and only one way to "make it" into an abelian Lie algebra, namely by defining $[x,y] := 0$ for all $x,y \in V$; and further, if you have two $K$-vector spaces and a $K$-linear map $f: V \rightarrow W$ between them, and you turn $V$ and $W$ into abelian Lie algebras in that one and only (and obvious) way, then $f$ is now also a homomorphism of Lie algebras. (And if $f$ was an isomorphism of vector spaces, then it is now an isomorphism of Lie algebras.)
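The "one and only one way" is easy to make concrete: with the trivial bracket, every linear map is automatically a Lie algebra homomorphism, since both sides of $f([x,y]) = [f(x), f(y)]$ are zero. A small numpy sketch over $K = \mathbb{R}$ (the names are mine):

```python
import numpy as np

def trivial_bracket(x, y):
    """The unique abelian Lie bracket on a vector space: [x, y] = 0."""
    return np.zeros_like(x)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))     # an arbitrary linear map f: R^2 -> R^3
x, y = rng.standard_normal(2), rng.standard_normal(2)

# f([x, y]) = f(0) = 0 = [f(x), f(y)], so f is a Lie algebra homomorphism.
assert np.allclose(A @ trivial_bracket(x, y), trivial_bracket(A @ x, A @ y))
```

Nothing about $A$, $x$, or $y$ was used, which is the point: the Lie-algebraic structure imposes no extra condition on the linear map.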

The idea behind this, again well explained in the other answer too, is that if you have an abelian Lie algebra, you cannot "know more" about it than whatever you know about it as a vector space. If you want, for an abelian Lie algebra you can just "forget" that it is a Lie algebra at all and just treat it as a vector space: you do not really lose information, because you can always "turn it back" into an abelian Lie algebra in one and only one (and obvious) way.