A field is a commutative ring in which every nonzero element is invertible (colloquially, a place where we can do normal arithmetic). Linear algebra takes place over fields. We will use the following examples:
the field $\mathbb{Q}$ of rational numbers
the field $\mathbb{R}$ of real numbers
the field $\mathbb{C}$ of complex numbers
the fields $\mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}$ of integers modulo $p$, for $p$ prime.
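For example, in $\mathbb{F}_5$ every nonzero element is invertible: $1^{-1} = 1$, $2^{-1} = 3$ (since $2 \cdot 3 = 6 \equiv 1 \pmod 5$), $3^{-1} = 2$ and $4^{-1} = 4$. Primality is needed here: $\mathbb{Z}/4\mathbb{Z}$ is not a field, since $2 \cdot x$ is always $0$ or $2$ modulo $4$, so $2$ has no inverse.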
If $V$ and $W$ are vector spaces over a field $k$, then a linear map is a function $T \colon V \to W$ such that
\[ T(v_1 + v_2) = T(v_1) + T(v_2) \quad \text{and} \quad T(\lambda v) = \lambda T(v) \]
for all $v, v_1, v_2 \in V$ and all $\lambda \in k$. If bases have been chosen for $V$ and $W$ (and they are finite dimensional), then every linear map can be written as a matrix. The linear map $T$ is invertible if it is a bijection, in which case its inverse is also a linear map. We write $\operatorname{Hom}(V, W)$ for the set of linear maps from $V$ to $W$, which is also a vector space over $k$. If $V = W$, we write $\operatorname{End}(V) = \operatorname{Hom}(V, V)$. We write $GL(V)$ for the group of invertible linear maps from $V$ to itself ($GL$ stands for ‘general linear’). If a basis of $V$ is chosen, and $\dim V = n$, then $\operatorname{End}(V)$ is given by the $n \times n$ matrices over $k$ while $GL(V)$ is the group of invertible $n \times n$ matrices.
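For instance, rotation of $\mathbb{R}^2$ through an angle $\theta$ is an invertible linear map; in the standard basis its matrix is $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$, an element of $GL(\mathbb{R}^2)$, and its inverse is rotation through $-\theta$.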
If $T \colon V \to W$ is a linear map, then its kernel and image are
\[ \ker(T) = \{ v \in V : T(v) = 0 \} \]
and
\[ \operatorname{im}(T) = \{ T(v) : v \in V \} \subseteq W. \]
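As a concrete example, if $T \colon k^3 \to k^2$ is the projection $T(x, y, z) = (x, y)$, then $\ker(T) = \{ (0, 0, z) : z \in k \}$ and $\operatorname{im}(T) = k^2$.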
A subspace of $V$ is a subset $U \subseteq V$ closed under addition and scalar multiplication.
If $U$ is a subspace of $V$, then the quotient space $V/U$ is the set of cosets (for addition) of $U$ in $V$. We denote its elements by
\[ v + U, \qquad v \in V. \]
In this situation, the map $V \to V/U$ sending $v$ to $v + U$ is a surjective linear map whose kernel is $U$. If $T \colon V \to W$ is a linear map, then the map taking $v + \ker(T)$ to $T(v)$ gives a well-defined isomorphism from $V/\ker(T)$ to $\operatorname{im}(T)$. Compare the first isomorphism theorem in group theory, and also the rank-nullity theorem
\[ \dim V = \dim \ker(T) + \dim \operatorname{im}(T). \]
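In the projection example above, two vectors of $k^3$ lie in the same coset of $\ker(T)$ exactly when they agree in their first two coordinates, and sending $(x, y, z) + \ker(T)$ to $(x, y)$ is the promised isomorphism $k^3/\ker(T) \cong \operatorname{im}(T) = k^2$; correspondingly $\dim k^3 = 3 = 1 + 2$.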
If $V$ and $W$ are two vector spaces, then their (external) direct sum is
\[ V \oplus W = \{ (v, w) : v \in V,\ w \in W \} \]
with componentwise addition and scalar multiplication. If $V$ and $W$ are both subspaces of some common space $U$, we say that $U$ is the internal direct sum of $V$ and $W$ if every element of $U$ can be written uniquely as $v + w$ for $v \in V$, $w \in W$. This is equivalent to requiring $V + W = U$ and $V \cap W = \{0\}$, or to requiring that the map
\[ V \oplus W \longrightarrow U \]
sending
\[ (v, w) \longmapsto v + w \]
is an isomorphism. Often in this situation we will simply say that $U$ is the direct sum of $V$ and $W$.
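For example, $\mathbb{R}^2$ is the internal direct sum of the lines $V = \{ (x, 0) : x \in \mathbb{R} \}$ and $W = \{ (x, x) : x \in \mathbb{R} \}$: every $(a, b)$ can be written uniquely as $(a - b, 0) + (b, b)$, and indeed $V + W = \mathbb{R}^2$ and $V \cap W = \{0\}$.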
We can generalise this to more than one subspace. If $V_1, \ldots, V_r$ are subspaces of $U$, then we say that $U$ is their internal direct sum if every element of $U$ can be written uniquely as $v_1 + \cdots + v_r$ with $v_i \in V_i$ for all $i$. Equivalently, if the map
\[ V_1 \oplus \cdots \oplus V_r \longrightarrow U, \qquad (v_1, \ldots, v_r) \longmapsto v_1 + \cdots + v_r, \]
is an isomorphism.
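For instance, $k^n$ is the internal direct sum of the $n$ lines $k e_1, \ldots, k e_n$ spanned by the standard basis vectors, since every vector is written uniquely as $(x_1, \ldots, x_n) = x_1 e_1 + \cdots + x_n e_n$.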
If $T \colon V \to V$ is a linear map from a vector space to itself, then an eigenvector of $T$ with eigenvalue $\lambda \in k$ is a non-zero vector $v \in V$ such that $T(v) = \lambda v$.
The linear map $T$ is diagonalizable if there is a basis of $V$ consisting of eigenvectors of $T$. This is equivalent to there being a basis for which the matrix of $T$ is diagonal.
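To illustrate, the map $\mathbb{R}^2 \to \mathbb{R}^2$ with matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is diagonalizable: $(1, 1)$ and $(1, -1)$ are eigenvectors with eigenvalues $1$ and $-1$, and in that basis the matrix is $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. By contrast $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is not diagonalizable: its only eigenvalue is $1$, and its eigenvectors are the nonzero multiples of $(1, 0)$, which do not span $\mathbb{R}^2$.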
For later use, we record the following theorem from linear algebra: if $T_1, \ldots, T_r \colon V \to V$ are linear maps that commute with each other and that are diagonalizable, then there is a basis of $V$ consisting of simultaneous eigenvectors of the $T_i$. Equivalently, a basis for which the matrices of the $T_i$ are all diagonal.
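For example, the matrices $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ commute and are each diagonalizable, and the basis $(1, 1)$, $(1, -1)$ of $\mathbb{R}^2$ consists of simultaneous eigenvectors (with eigenvalues $1, -1$ and $3, 1$ respectively). The commuting hypothesis cannot be dropped: $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ and $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ are each diagonalizable but do not commute, and they have no common eigenvector.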