We fix the following `standard' basis of $\mathfrak{sl}_2(\mathbb{C})$:
\[
H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
E = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
F = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
\]
These satisfy the following commutation relations, which are fundamental (check them!):
\[
[H, E] = 2E, \qquad [H, F] = -2F, \qquad [E, F] = H.
\]
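As a quick sanity check, the relations can be verified mechanically (a numpy sketch, with $H$, $E$, $F$ realized by the explicit $2 \times 2$ matrices of the standard basis):

```python
import numpy as np

# The standard basis of sl_2(C), realized as 2x2 matrices.
H = np.array([[1, 0], [0, -1]], dtype=complex)
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

def bracket(A, B):
    """The matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# The fundamental commutation relations:
assert np.allclose(bracket(H, E), 2 * E)   # [H, E] = 2E
assert np.allclose(bracket(H, F), -2 * F)  # [H, F] = -2F
assert np.allclose(bracket(E, F), H)       # [E, F] = H
```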
We decompose representations of $\mathfrak{sl}_2(\mathbb{C})$ into their eigenspaces for the action of $H$. The elements $E$ and $F$ will then move vectors between these eigenspaces, and this will let us analyze the representation theory of $\mathfrak{sl}_2(\mathbb{C})$.
Since $\mathrm{SL}_2(\mathbb{C})$ is simply connected, we have:
Every finite-dimensional representation of $\mathfrak{sl}_2(\mathbb{C})$ is the derivative of a unique representation of $\mathrm{SL}_2(\mathbb{C})$.
Note that we have not proved this. However, we will use the result freely in what follows. It is possible to give purely algebraic proofs of all the results for which we use the previous proposition, but this is more complicated.
Let $(\pi, V)$ be a finite-dimensional complex-linear representation of $\mathfrak{sl}_2(\mathbb{C})$. Then $\pi(H)$ is diagonalizable with integer eigenvalues.
By the previous proposition, $\pi$ is the derivative of a representation $\Pi$ of $\mathrm{SL}_2(\mathbb{C})$. We can identify $\mathrm{U}(1)$ as a subgroup of $\mathrm{SL}_2(\mathbb{C})$ by the following map:
\[
e^{i\theta} \mapsto \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}.
\]
By Maschke's Theorem for $\mathrm{U}(1)$, the action of $\mathrm{U}(1)$ on $V$ can be diagonalized. Taking the derivative, we see that $\pi(iH)$ can be diagonalized and hence so can $\pi(H)$.
In fact, the classification of irreducible representations of $\mathrm{U}(1)$ shows that $\pi(iH)$ has eigenvalues in $i\mathbb{Z}$, and so $\pi(H)$ has eigenvalues in $\mathbb{Z}$. ∎
The proof of the proposition is an instance of Weyl's unitary trick. We turned the action of $H$, which infinitesimally generates a non-compact one-parameter subgroup $\{\exp(tH)\}$ of $\mathrm{SL}_2(\mathbb{C})$, into the action of the compact group $\mathrm{U}(1)$ infinitesimally generated by $iH$. The action of this compact subgroup can be diagonalized.
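The contrast between the two one-parameter subgroups can be seen concretely (a numpy sketch; the value of $t$ and the norm cutoff are illustrative): $\exp(tH)$ has unbounded norm as $t$ grows, while $\exp(t\,iH)$ is unitary for every $t$.

```python
import numpy as np

H = np.diag([1.0, -1.0])
t = 5.0

# exp(tH) = diag(e^t, e^{-t}) lies on a non-compact one-parameter subgroup:
# its operator norm grows without bound as t -> infinity.
g = np.diag(np.exp(t * np.diag(H)))
assert np.linalg.norm(g, 2) > 100

# exp(t*iH) = diag(e^{it}, e^{-it}) is unitary for all t:
# iH infinitesimally generates a compact circle subgroup.
u = np.diag(np.exp(1j * t * np.diag(H)))
assert np.allclose(u @ u.conj().T, np.eye(2))
```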
The proposition does not hold for an arbitrary representation of the one-dimensional Lie algebra $\mathbb{C}H$ generated by $H$. Namely, the map sending $H$ to $\left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)$ defines a representation of $\mathbb{C}H$ that cannot be diagonalized. It is really the interaction of $H$ with the other generators $E$ and $F$ which makes the proposition work.
Let $(\pi, V)$ be a finite-dimensional complex-linear representation of $\mathfrak{sl}_2(\mathbb{C})$. By Proposition \ref{H-diag} we get a decomposition
\begin{equation}\label{weight-dec}
V = \bigoplus_{n \in \mathbb{Z}} V_n,
\end{equation}
where each $V_n$ is the eigenspace for $H$ with eigenvalue $n$:
\[
V_n = \{ v \in V : Hv = nv \}.
\]
Each $n$ with $V_n \neq 0$ occurring in equation \eqref{weight-dec} is called a weight (more precisely, an $H$-weight) for the representation $V$.
Each $V_n$ is called a weight space for $V$.
The nonzero vectors in $V_n$ are called weight vectors for $V$.
The set of weights of the zero representation is empty, while the trivial representation has a single weight, $0$.
Let $V = \mathbb{C}^2$ be the standard representation. Write $e_1, e_2$ for the standard basis. Then $He_1 = e_1$ and $He_2 = -e_2$. Thus the set of weights of $V$ is $\{1, -1\}$.
We consider the adjoint representation of $\mathfrak{sl}_2(\mathbb{C})$ on itself. By the commutation relations, we see directly that $\operatorname{ad}(H)$ has eigenvalues $2$ (on $E$), $-2$ (on $F$), and $0$ (on $H$), so the set of weights is
\[
\{-2, 0, 2\}.
\]
The non-zero weights $2$ and $-2$ are called the roots of $\mathfrak{sl}_2(\mathbb{C})$, and their weight spaces are the root spaces $\mathfrak{g}_2 = \mathbb{C}E$ and $\mathfrak{g}_{-2} = \mathbb{C}F$. The weight vectors are called root vectors.
Thus we have the root space decomposition
\[
\mathfrak{sl}_2(\mathbb{C}) = \mathfrak{g}_{-2} \oplus \mathfrak{g}_0 \oplus \mathfrak{g}_2 = \mathbb{C}F \oplus \mathbb{C}H \oplus \mathbb{C}E.
\]
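The eigenvalue computation for the adjoint action can be checked mechanically (a numpy sketch with the standard matrix realization of the basis):

```python
import numpy as np

H = np.array([[1, 0], [0, -1]], dtype=complex)
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

def ad_H(X):
    """The adjoint action of H: X -> [H, X]."""
    return H @ X - X @ H

# E, H, F are eigenvectors of ad(H) with eigenvalues 2, 0, -2 respectively,
# so the weights of the adjoint representation are {-2, 0, 2}.
assert np.allclose(ad_H(E), 2 * E)
assert np.allclose(ad_H(H), 0 * H)
assert np.allclose(ad_H(F), -2 * F)
```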
We consider $V \otimes V$, where $V$ is the standard representation. Then
\[
H(e_1 \otimes e_1) = He_1 \otimes e_1 + e_1 \otimes He_1 = 2\,(e_1 \otimes e_1),
\]
and similarly
\[
H(e_1 \otimes e_2) = H(e_2 \otimes e_1) = 0, \qquad H(e_2 \otimes e_2) = -2\,(e_2 \otimes e_2),
\]
so that the weights are $\{2, 0, 0, -2\}$. Note that this is a multiset (a set with repeated elements), and we say that the weight $0$ has `multiplicity two' (in general, the multiplicity of a weight is the dimension of the weight space).
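Since $H$ acts on a tensor product as $H \otimes 1 + 1 \otimes H$, the weight multiset can be read off from a Kronecker-product computation (a numpy sketch):

```python
import numpy as np

H = np.diag([1.0, -1.0])  # H on the standard representation
I2 = np.eye(2)

# On V (x) V, H acts as H (x) 1 + 1 (x) H; since H is diagonal, so is this.
H_tensor = np.kron(H, I2) + np.kron(I2, H)
weights = sorted((int(w.real) for w in np.diag(H_tensor)), reverse=True)

# The weight multiset {2, 0, 0, -2}: weight 0 appears with multiplicity two.
assert weights == [2, 0, 0, -2]
```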
Take $V = \operatorname{Sym}^n(\mathbb{C}^2)$, realized as the homogeneous polynomials of degree $n$ in $x$ and $y$. A set of basis vectors is
\[
x^n, \ x^{n-1}y, \ \dots, \ xy^{n-1}, \ y^n.
\]
We calculate
\[
H \cdot (x^{n-k} y^k) = (n - 2k)\, x^{n-k} y^k.
\]
Thus the weights are (writing $k = 0, 1, \dots, n$):
\[
n, \ n-2, \ n-4, \ \dots, \ -(n-2), \ -n.
\]
We will soon see an explanation for this pattern.
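The pattern can be verified symbolically (a sympy sketch, under the assumption that $\operatorname{Sym}^n$ of the standard representation is realized on homogeneous degree-$n$ polynomials in $x, y$, with $H$ acting as the operator $x\,\partial_x - y\,\partial_y$):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Assumed realization: H acts on polynomials as x*d/dx - y*d/dy, which is
# consistent with H e_1 = e_1 and H e_2 = -e_2 on the standard basis.
def H_act(p):
    return sp.expand(x * sp.diff(p, x) - y * sp.diff(p, y))

n = 3
weights = []
for k in range(n + 1):
    m = x**(n - k) * y**k               # basis monomial x^(n-k) y^k
    assert H_act(m) == (n - 2 * k) * m  # eigenvalue n - 2k
    weights.append(n - 2 * k)

# Weights step down by 2 from n to -n, symmetric about 0.
assert weights == [3, 1, -1, -3]
```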
The following is our first version of the fundamental weight calculation.
Let $V$ be a complex-linear representation of $\mathfrak{sl}_2(\mathbb{C})$. Let $n$ be a weight of $V$ and let $v \in V_n$. Then
\[
H(Ev) = (n+2)\,Ev
\]
and
\[
H(Fv) = (n-2)\,Fv.
\]
Thus we have three maps:
\[
E \colon V_n \to V_{n+2}, \qquad F \colon V_n \to V_{n-2}, \qquad H \colon V_n \to V_n.
\]
We have, for $v \in V_n$,
\[
H(Ev) = E(Hv) + [H, E]v = E(nv) + 2Ev = (n+2)\,Ev.
\]
So $Ev \in V_{n+2}$, as required.
The claim about the action of $F$ is proved similarly. ∎
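The weight-raising behaviour of $E$ can be observed numerically on the tensor square of the standard representation (a numpy sketch; the chosen vector is illustrative):

```python
import numpy as np

H = np.diag([1.0, -1.0])
E = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)

# On V (x) V, an element X acts as X (x) 1 + 1 (x) X.
H_T = np.kron(H, I2) + np.kron(I2, H)
E_T = np.kron(E, I2) + np.kron(I2, E)

v = np.kron([1.0, 0.0], [0.0, 1.0])   # e_1 (x) e_2, a weight-0 vector
assert np.allclose(H_T @ v, 0 * v)

w = E_T @ v                           # E sends it into the weight-2 space
assert np.allclose(H_T @ w, 2 * w)    # weight 0 + 2 = 2
```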
A vector $v \in V$ is a highest weight vector if it is a weight vector and if
\[
Ev = 0.
\]
In this case we call the weight of $v$ a highest weight.
Any nonzero finite-dimensional complex-linear representation of $\mathfrak{sl}_2(\mathbb{C})$ has a highest weight vector.
Indeed, let $n$ be the numerically greatest weight of $V$ (there must be one, as $V$ is finite-dimensional) and let $v$ be a weight vector of weight $n$. Then $Ev$ has weight $n+2$ by the fundamental weight calculation, so $Ev$ must be zero as $n$ was maximal. ∎
Let $V = \mathbb{C}^2 \otimes \mathbb{C}^2$, the tensor square of the standard representation. Then the highest weight vectors are (up to scalar) $e_1 \otimes e_1$ and $e_1 \otimes e_2 - e_2 \otimes e_1$.
These are easily checked to be highest weight vectors: the first is killed by $E$ since $Ee_1 = 0$, and the second becomes
\[
E(e_1 \otimes e_2 - e_2 \otimes e_1) = e_1 \otimes Ee_2 - Ee_2 \otimes e_1 = e_1 \otimes e_1 - e_1 \otimes e_1 = 0.
\]
It is left to you to check that there are no further highest weight vectors.
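These checks, and the fact that the kernel of $E$'s action on the tensor square $\mathbb{C}^2 \otimes \mathbb{C}^2$ is only two-dimensional, can be confirmed numerically (a numpy sketch):

```python
import numpy as np

E = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)
E_T = np.kron(E, I2) + np.kron(I2, E)   # E acting on V (x) V

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v1 = np.kron(e1, e1)                    # e_1 (x) e_1
v2 = np.kron(e1, e2) - np.kron(e2, e1)  # e_1 (x) e_2 - e_2 (x) e_1

# Both are killed by E, so both are highest weight vectors.
assert np.allclose(E_T @ v1, 0)
assert np.allclose(E_T @ v2, 0)

# The kernel of E on V (x) V is exactly 2-dimensional, so (up to scalar)
# the weight vectors above account for all highest weight vectors.
rank = np.linalg.matrix_rank(E_T)
assert E_T.shape[0] - rank == 2
```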