Conversely, if for all , we have
for all . Taking the derivative at gives
Thus the Lie algebra of is
We have that if and only if is imaginary for all and
for all . Thus is determined by its imaginary diagonal entries and its complex entries above the diagonal. Its real dimension is thus
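As a numerical sanity check of this dimension count (assuming the group in question is the unitary group, so that the Lie algebra consists of skew-Hermitian matrices — this reading is an assumption, since the symbols are not shown above), here is a short script that counts the real parameters and verifies that the exponential of a skew-Hermitian matrix is unitary:

```python
# Sanity check (assumption: the Lie algebra here is u(n), the skew-Hermitian
# matrices): count the real dimension and verify exp(X) is unitary.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(X, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small X)."""
    n = len(X)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * X[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

n = 3
# n imaginary diagonal entries, plus n(n-1)/2 complex entries above the
# diagonal (2 real parameters each): n + 2 * n(n-1)/2 = n^2.
dim = n + 2 * (n * (n - 1) // 2)
print(dim)  # 9 = n^2

# A sample skew-Hermitian matrix: X^* = -X.
X = [[1j, 2 + 1j, 0], [-2 + 1j, -3j, 1], [0, -1, 2j]]
U = mat_exp(X)
Ustar = [[U[j][i].conjugate() for j in range(n)] for i in range(n)]
prod = mat_mul(Ustar, U)
err = max(abs(prod[i][j] - (1 if i == j else 0))
          for i in range(n) for j in range(n))
print(err < 1e-8)  # True: exp(X) is unitary
```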
If is nonzero, then
so . Thus is not a complex subspace of .
For a challenge, try to show that there is no complex structure on : there is no linear map
such that
for all and
for all (for odd this is easy, but it is trickier for even).
whence . Conversely, if for all then
for all . Differentiating with respect to at gives
as required.
For the last part, we need only show that implies . But if then
so as required.
1. Problem 55 suggests that we consider the following basis of infinitesimal rotations around the axes:
In fact, these are the images of under an isomorphism . By calculation we have
These may remind you of the quaternion group, whose irreducible two-dimensional representation leads us to consider the following basis for :
We have
— we need the factors of for this, otherwise the right hand sides would be doubled.
It follows that the linear map taking to , to and to is an isomorphism of Lie algebras.
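If you want to verify the commutation relations behind this isomorphism numerically, here is a sketch. The conventions are assumptions, since they are not pinned down above: the infinitesimal rotations about the coordinate axes in so(3), and the basis of su(2) built by scaling the Pauli matrices by the factors of one half mentioned above.

```python
# Check that the bases of so(3) and su(2) satisfy the same relations
# [e1, e2] = e3 (and cyclic). Assumed conventions: L_k = infinitesimal
# rotation about the k-th axis; X_k = -(i/2) * sigma_k with sigma_k the
# Pauli matrices.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))]
            for i in range(len(A))]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A)))

# Infinitesimal rotations in so(3).
L1 = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
L2 = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
L3 = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# Pauli matrices scaled to lie in su(2).
X1 = [[0, -1j / 2], [-1j / 2, 0]]      # -(i/2) * sigma_1
X2 = [[0, -1 / 2], [1 / 2, 0]]         # -(i/2) * sigma_2
X3 = [[-1j / 2, 0], [0, 1j / 2]]       # -(i/2) * sigma_3

ok_so3 = (close(bracket(L1, L2), L3) and close(bracket(L2, L3), L1)
          and close(bracket(L3, L1), L2))
ok_su2 = (close(bracket(X1, X2), X3) and close(bracket(X2, X3), X1)
          and close(bracket(X3, X1), X2))
print(ok_so3, ok_su2)  # True True
```

Without the factors of one half the su(2) brackets would come out doubled, as remarked above.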
2. By the problems class (or problem 7), we know that has a basis with
We calculate that , and . It follows that satisfy the same commutation relations as
so that there is a Lie algebra isomorphism sending , , .
How might you think of this? Well, the eigenvalues of the linear map are , with and the eigenvectors. So you might look for an element of such that the eigenvalues of are , and above works; and are then the eigenvectors!
For a more conceptual approach, let , a bilinear form on . For each , is a linear map preserving this bilinear form. But it is possible to write down a basis of such that
With respect to this basis, is the bilinear form determined by and so for all . The derived map on Lie algebras is the desired isomorphism.
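The commutation relations underlying this part can also be checked directly. A sketch, assuming the standard sl(2) basis (the particular basis used in the text is not shown, so this choice is an assumption):

```python
# The sl_2 relations [h, e] = 2e, [h, f] = -2f, [e, f] = h, verified for
# the standard (assumed) basis e, f, h.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

ok = (bracket(h, e) == scale(2, e)
      and bracket(h, f) == scale(-2, f)
      and bracket(e, f) == h)
print(ok)  # True
```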
3. Here is a possible approach. Show that acts on the four-dimensional real vector space of Hermitian matrices by for and a Hermitian matrix. The quadratic form on is preserved by this action. It has signature ; indeed, it is positive definite on the space of matrices
and negative definite on the subspace of matrices
We obtain a map ; its derivative is the required isomorphism.
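The key invariance can be checked on an example. A sketch, assuming the action is by conjugation-with-adjoint and the quadratic form is the determinant (both standard for this construction, but assumptions here since the formulas are not shown):

```python
# Check on a sample that M -> A M A^* with det(A) = 1 preserves the
# quadratic form M -> det(M) on Hermitian 2x2 matrices.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2 + 1j], [0, 1]]          # det(A) = 1
M = [[2, 1 - 1j], [1 + 1j, 3]]     # Hermitian: M^* = M, det(M) = 4

AMAstar = mul(mul(A, M), conj_transpose(A))

print(abs(det(AMAstar) - det(M)) < 1e-12)   # True: det is preserved
print(conj_transpose(AMAstar) == AMAstar)   # True: still Hermitian
```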
We deduce that the map is a diffeomorphism (it and its inverse are clearly smooth). Moreover, writing , , the latter space is
which is the three-sphere.
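As a concrete check of this identification (assuming the standard parametrisation of such matrices by a pair of complex numbers on the unit sphere — an assumption, since the notation above is not shown):

```python
# A matrix [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1 is unitary
# with determinant 1, identifying SU(2) with the unit three-sphere in C^2.

a, b = complex(0.6, 0.0), complex(0.0, 0.8)   # |a|^2 + |b|^2 = 1
U = [[a, b], [-b.conjugate(), a.conjugate()]]

detU = U[0][0] * U[1][1] - U[0][1] * U[1][0]
Ustar = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
prod = [[sum(Ustar[i][k] * U[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]

print(abs(detU - 1) < 1e-12)                      # True
print(all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))   # True: unitary
```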
Firstly we will show that the Lie algebra of is contained in . Indeed, suppose that with
for all . Then for all ,
for all . Taking the derivative at gives
for all and taking the derivative of this at gives for all , whence .
Conversely, if is connected and , then I claim that for all . Indeed, for ,
as . So commutes with all elements of of the form . Since these generate by the connectedness assumption, we see .
For typesetting reasons I’ll write for the column vector .
Starting with :
by the multivariable chain rule. Thus acts as and a very similar calculation shows that acts as .
Finally,
so that acts as .
An alternative solution would be to compute
using the multivariate chain rule. The derivative of at is , and the derivative of is , so the required derivative is
which one can check agrees with the answers from before.
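The differential-operator description can be modelled directly on polynomials. A sketch, assuming one common convention for the operators (the signs depend on the conventions in the text, which are not shown, so take the particular choices here as assumptions):

```python
# Model e, f, h acting on polynomials in x, y as differential operators:
#   E = x d/dy,   F = y d/dx,   H = x d/dx - y d/dy   (assumed convention).
# Polynomials are dicts {(i, j): coeff} for sums of coeff * x^i * y^j.

def E(p):
    return {(i + 1, j - 1): j * c for (i, j), c in p.items() if j > 0}

def F(p):
    return {(i - 1, j + 1): i * c for (i, j), c in p.items() if i > 0}

def H(p):
    return {(i, j): (i - j) * c for (i, j), c in p.items() if i != j}

def sub(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) - c
        if out[k] == 0:
            del out[k]
    return out

def bracket(op1, op2, p):
    return sub(op1(op2(p)), op2(op1(p)))

# Check [E, F] = H and [H, E] = 2E on a sample polynomial.
p = {(2, 1): 3, (0, 4): 1}   # 3*x^2*y + y^4
ok1 = bracket(E, F, p) == H(p)
ok2 = bracket(H, E, p) == {k: 2 * c for k, c in E(p).items()}
print(ok1, ok2)  # True True
```

On a monomial the first bracket gives the coefficient i(j+1) - j(i+1) = i - j, which is exactly the eigenvalue of H, matching the calculation above.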
Remark. Another possible convention is to use rather than , which leads to slightly different formulas. This second convention is the same as if we considered elements of as row vectors, with matrices acting on the right, and defined instead
1. Let be the standard basis vector for . Then
Multiplying out, and noting that while , we get that
which is what we want (since ).
2. This is similar. We get, with ,
This simplifies to
as required.
3. Compute the action of on the basis vectors . Skipping the working, the result is
We will show that commutes with each of , and . Note that, since is a Lie algebra representation, we have
For instance, the first equation follows from . So we get
and therefore
Similarly, commutes with . Finally,
If is irreducible, then since commutes with all elements of , it is a homomorphism of representations, and so acts as a scalar by Schur's lemma.
The representation is irreducible, and so is a scalar. To find the scalar, we just need to evaluate on a single element of ; I will use the highest weight vector . We have
so acts as the scalar on . Here we see why we might want to use instead: then it acts as .
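The scalar can be confirmed on the whole weight basis, not just the highest weight vector. A sketch, assuming the standard weight-basis formulas and one common normalisation of the Casimir (both assumptions, since the formulas above are not shown):

```python
# The Casimir Omega = e*f + f*e + h^2/2 on the irreducible V_n, using the
# (assumed) standard weight-basis formulas on v_0, ..., v_n:
#   h.v_k = (n - 2k) v_k,   f.v_k = v_{k+1},   e.v_k = k(n - k + 1) v_{k-1}.
from fractions import Fraction

def act(op, vec, n):
    """vec is a dict {k: coeff}; apply e, f or h in the basis above."""
    out = {}
    for k, c in vec.items():
        if op == 'h':
            out[k] = out.get(k, 0) + (n - 2 * k) * c
        elif op == 'f' and k < n:
            out[k + 1] = out.get(k + 1, 0) + c
        elif op == 'e' and k > 0:
            out[k - 1] = out.get(k - 1, 0) + k * (n - k + 1) * c
    return {k: c for k, c in out.items() if c != 0}

def casimir(vec, n):
    ef = act('e', act('f', vec, n), n)
    fe = act('f', act('e', vec, n), n)
    hh = act('h', act('h', vec, n), n)
    keys = set(ef) | set(fe) | set(hh)
    return {k: ef.get(k, 0) + fe.get(k, 0) + Fraction(hh.get(k, 0), 2)
            for k in keys}

n = 4
expected = Fraction(n * (n + 2), 2)   # the scalar n(n+2)/2
ok = all(casimir({k: 1}, n) == {k: expected} for k in range(n + 1))
print(ok)  # True
```

With this normalisation the scalar is n(n+2)/2; rescaling the Casimir rescales the eigenvalue accordingly, which is the point of the alternative normalisation mentioned above.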
Recall that acts as , acts as , acts as . We see that
If we apply this to a monomial of degree , we find that
We can explain this as follows: the space of homogeneous polynomial functions of degree is isomorphic to and so the calculation from the previous part applies!
We use the notation for the usual weight basis of , so has weight , for . We have the formulas and . We also abbreviate .
The weights of are . To obtain the weights of we add together all possible (unordered, possibly equal) pairs of these and get:
Thus
A highest weight vector of weight 4 is (clear as it is a symmetric product of highest weight vectors). To get a highest weight vector of weight 2 we must take a linear combination of and which is killed by . Since and we see that
is a weight vector of weight 0 killed by , so a highest weight vector of weight 0.
This time we must take all sums of unordered pairs of distinct elements of . This gives
so that . A highest weight vector of weight is .
We have to add together all pairs of weights from and , giving
as the weights of . Thus the decomposition is
A highest weight vector of weight 5 is . We can now apply repeatedly (and divide out by constant factors where possible to keep the numbers small) to obtain a weight basis of the copy of in the representation, as shown in the table.
We have and so that
is a highest weight vector of weight 3. We apply repeatedly (and divide out scalars where possible) to obtain a weight basis of the copy of in the representation:
Notice that we can ‘cheat’ and obtain just the weight vectors with nonpositive weight, and then apply the symmetry sending to to obtain those of nonnegative weight.
Finally, we have , , and so that
is a highest weight vector of weight 1. Applying (or the symmetry discussed above) we see that this vector together with
is a weight basis for the copy of .
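The weight bookkeeping in the last few parts can be automated: list the weights, form the pairwise sums, and peel off highest weights to decompose into irreducibles. A sketch (which particular V_n each part uses is not shown above, so the examples below are assumptions):

```python
# Weight bookkeeping for sl_2: the weights of V_n are n, n-2, ..., -n.
# Pairwise sums give Sym^2 and Lambda^2; peeling highest weights decomposes
# a weight multiset into irreducibles.
from collections import Counter
from itertools import combinations, combinations_with_replacement

def weights(n):
    return [n - 2 * k for k in range(n + 1)]

def sym2_weights(n):
    return Counter(a + b
                   for a, b in combinations_with_replacement(weights(n), 2))

def ext2_weights(n):
    # Unordered pairs of *distinct* basis weights.
    return Counter(a + b for a, b in combinations(weights(n), 2))

def decompose(mult):
    """Peel highest weights: returns the list of n with V_n a summand."""
    mult = Counter(mult)
    summands = []
    while any(mult.values()):
        top = max(w for w, m in mult.items() if m > 0)
        summands.append(top)
        mult.subtract(weights(top))
    return summands

print(decompose(sym2_weights(2)))               # [4, 0]
print(decompose(ext2_weights(3)))               # [4, 0]
print(decompose(Counter(a + b for a in weights(2)
                        for b in weights(3))))  # [5, 3, 1]
```

The last line reproduces the decomposition with highest weights 5, 3, 1 found above.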
We must add together all unordered triples of (not necessarily distinct) elements of . We get that the weights of are:
so that
A highest weight vector of weight is . We have and so that
is a highest weight vector of weight 2.
The weights of are and the weights of are . Without loss of generality, . Adding these lists together, remembering multiplicity, we see that in the tensor product:
For weights , each occurs times as ; the same holds for their negatives.
Each weight occurs times; specifically, occurs as
This agrees with the weights of
and so this is the decomposition of into irreducibles.
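The general pattern here is the Clebsch–Gordan rule, which can be verified by comparing weight multisets for a range of cases (a sketch of the weight count only, not of the highest weight vectors):

```python
# Clebsch-Gordan check: for m >= n, the weights of V_m (x) V_n match those
# of V_{m+n} + V_{m+n-2} + ... + V_{m-n}, with multiplicity.
from collections import Counter

def weights(n):
    return Counter(n - 2 * k for k in range(n + 1))

def tensor_weights(m, n):
    return Counter(a + b for a in weights(m).elements()
                   for b in weights(n).elements())

def clebsch_gordan_weights(m, n):
    total = Counter()
    for j in range(m - n, m + n + 1, 2):   # j = m-n, m-n+2, ..., m+n
        total += weights(j)
    return total

ok = all(tensor_weights(m, n) == clebsch_gordan_weights(m, n)
         for m in range(6) for n in range(m + 1))
print(ok)  # True
```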
Omitted (for now).
Let be the dual representation. If , then the matrix of with respect to the dual basis is . From this, we see that if , while and .
Moreover, if is diagonal with entries , then . Thus is a weight vector with weight . Since is killed by and , it is a highest weight vector.
It is worth thinking about how you derive the formula for the matrix of with respect to the dual basis. It is defined so that, for and , and ,
We apply this with , recalling that . Then
This implies that
which exactly says that the matrix of with respect to the dual basis is minus the transpose of the matrix of with respect to the original basis.
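The defining property of the dual action (the pairing of a dual vector against a vector is annihilated by the combined action) can be checked numerically; a sketch, with the vectors and matrix below chosen arbitrarily:

```python
# Why the dual representation uses minus the transpose: the pairing must
# satisfy <X.phi, v> + <phi, X.v> = 0, and X.phi = -X^T phi achieves this.

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v)))
            for i in range(len(v))]

def pair(phi, v):
    return sum(p * x for p, x in zip(phi, v))

X = [[1, 2, 0], [0, -3, 1], [4, 0, 2]]   # arbitrary matrix acting on V
minus_XT = [[-X[j][i] for j in range(3)] for i in range(3)]

v = [1, -2, 5]
phi = [3, 0, -1]

lhs = pair(mat_vec(minus_XT, phi), v) + pair(phi, mat_vec(X, v))
print(lhs)  # 0
```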
The weights of are with non-negative integers summing to 3. By Weyl symmetry it is enough to find the dominant weights. Since
these are those with . The only possibilities for are then and corresponding to
Applying Weyl symmetry we see that the weights are
It is left to you to draw these; for a similar picture of see 11.
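The weights can also be enumerated mechanically: they correspond to the monomials of degree 3 in three variables, i.e. exponent triples of non-negative integers summing to 3 (the identification of weights with exponent triples is the assumed convention here):

```python
# Weights of Sym^3 of the standard representation of sl_3: exponent
# triples (a, b, c) with a + b + c = 3, each with multiplicity one.
from itertools import product

triples = [(a, b, c) for a, b, c in product(range(4), repeat=3)
           if a + b + c == 3]
print(len(triples))   # 10 = dim Sym^3(C^3)
print(sorted(triples, reverse=True))
```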
The representation has a highest weight vector of weight . It therefore contains a subrepresentation isomorphic to (which will be the subrepresentation generated by ).
The weights of are (as we can write in three ways). Therefore, looking at weights, we have
where is a one-dimensional representation with a single weight, 0. Therefore is trivial and
To find the trivial representation inside , we look for a highest weight vector of weight 0. The following works:
For a conceptual proof, if and are any representations of a Lie algebra then we can define a representation on by
for . Then we have a map
sending
where is defined by
One can check that this is a -isomorphism (if and are finite-dimensional). In the case at hand we get
where the right hand side is a representation of by the same formula defining the adjoint representation. Then
as representations of , with corresponding to the subspace of scalar matrices and in the obvious way.
An intuitive way to see the isomorphism is that is ‘row vectors of length ’, is ‘column vectors of length ’, and is ’ matrices’, and the isomorphism takes a column vector tensored with a row vector to the matrix .
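This column-times-row description can be checked to be equivariant: acting on the tensor factor by factor matches the commutator action on the resulting matrix. A sketch with arbitrary integer data:

```python
# The map sending v (x) w^T to the matrix v w^T intertwines the
# tensor-product action with M -> XM - MX:
#   X.(v w^T) = (Xv) w^T + v (-X^T w)^T = X (v w^T) - (v w^T) X.

def mat_vec(A, v):
    n = len(v)
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = [[0, 1, -2], [3, 0, 1], [1, 1, 0]]
v = [1, 2, 0]
w = [-1, 0, 4]

# Tensor-product action on v (x) w: (Xv) (x) w + v (x) (-X^T w).
minus_XT_w = [-sum(X[j][i] * w[j] for j in range(3)) for i in range(3)]
a, b = outer(mat_vec(X, v), w), outer(v, minus_XT_w)
tensor_side = [[a[i][j] + b[i][j] for j in range(3)] for i in range(3)]

# Commutator action on the matrix v w^T: X M - M X.
M = outer(v, w)
XM, MX = mat_mul(X, M), mat_mul(M, X)
matrix_side = [[XM[i][j] - MX[i][j] for j in range(3)] for i in range(3)]

print(tensor_side == matrix_side)  # True
```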
It is easy to see that its weight is . Moreover, as and are highest weight vectors, so are and , and so is their tensor product.
(General lemmas: if is a highest weight vector, then for each positive root vector , so is a highest weight vector in (you should also check it is a weight vector!). If are highest weight vectors, then
for each positive root vector so is a highest weight vector (you should also check it is a weight vector!).)
Suppose that is reducible, so that there is a nonzero proper subrepresentation . Then by complete reducibility, there is another subrepresentation with . Then and both have nonzero highest weight vectors, which must be linearly independent from each other, contradicting the assumption on .
For the standard representation, the weight spaces are spanned by , , and respectively and only (or a scalar multiple of it) is a highest weight vector. So it is irreducible. Similarly for the dual representation.
For the adjoint representation, out of the nonzero weights only is a highest weight, with unique highest weight vector. We have to check there are no highest weight vectors of weight zero. Such a vector would be a nonzero element such that for all . This would imply that is scalar, but since has trace zero this is impossible.
Remark: this is not the simplest way to see that the standard representation is irreducible; indeed, the action of on is transitive, which implies irreducibility. Similarly for the dual. Can you prove that the adjoint representation is irreducible without using weights?
The weights of are while the weights of are , , . Adding everything from the first list to everything from the second, we see that has weights
The weight diagram is shown in figure 12.
Since is a weight vector of weight and is a weight vector of weight , their tensor product is a weight vector of weight . Similarly, the other terms are also weight vectors of weight . Therefore their sum is also a weight vector of weight .
We can hit it with and , using :
and
so it is a highest weight vector.
We find whence
while so
These are linearly independent, as only appears in the second while only appears in the first.
Let . Note that is completely reducible, and the possible irreducible constituents are , , and , since these are the only weights of that are dominant. Since has a highest weight vector of weight , and this is not a weight of or , must have a subrepresentation . As is the unique highest weight vector of weight , it must occur in . Similarly, since (by part 2) has a highest weight vector of weight , must have a subrepresentation . In fact, one can check that has basis
together with the two similar vectors obtained by permuting the roles of .
So we have and we want to show equality. Note that is a highest weight vector in of weight , and is a nonzero weight vector in of weight . Moreover, part (3) implies that occurs with multiplicity at least two in . We also have that occurs in with multiplicity one. All the dominant weights of are now accounted for by with the correct multiplicities. Thus, by Weyl symmetry, the weights of agree with the weights of and we have equality. We see that the multiplicities of , , are exactly two in and all other weights in occur with multiplicity one, and we have
The weight diagram of is obtained from that of (see part (1)) by removing one circle around each of , , and .
We have that
So is a weight vector, and we know that these are a basis for .
To see that they are distinct, suppose that and for nonnegative integers , etc., and that
Then
which implies that . But as , this common difference must be zero, so , and as required.
Suppose that is a highest weight vector. Since all the weights from the first part are distinct, the weight spaces are one-dimensional, so (after scaling) for some . We have
as is a highest weight vector, so . Similarly, . Thus (up to scalar) is the unique highest weight vector.
By part 2 and the previous question, has a unique highest weight vector (up to scalar), and so is irreducible. Its highest weight is the weight of , which is .