Applied Mathematics

Section 4.2 Eigenvalues and Eigenvectors of Linear Transformations

Recall that in Section 4.1 we found the eigenvalues and eigenvectors of a square matrix. In this section we examine how to find the eigenvalues and eigenvectors of a linear map. The definitions are extensions of what we saw for matrices.

Definition 4.2.1.

Let \(V\) be a finite-dimensional vector space and \(T:V \rightarrow V\) a linear map. The nonzero vector \(\vec{x}\) is an eigenvector with associated eigenvalue \(\lambda\) if
\begin{equation*} T(\vec{x}) = \lambda \vec{x}. \end{equation*}
There are a few ways to find \(\vec{x}\) and \(\lambda\text{.}\) In this section, we will only see examples that are relatively simple, found either by inspection or by finding the matrix representation of the map. In addition, this section only shows examples from finite-dimensional vector spaces; in general, however, there is no such restriction.
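Before the examples, it may help to see the definition in computational form. The following is a minimal sketch in Python, assuming the map already has a matrix representation \(A\) in some basis; the function name and tolerance are ours, chosen for illustration.

```python
import numpy as np

# A minimal sketch of the definition, assuming the map T already has a
# matrix representation A in some basis.  The vector x is an eigenvector
# with eigenvalue lam exactly when A @ x = lam * x.
def is_eigenpair(A, x, lam, tol=1e-10):
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(A @ x - lam * x) < tol

# For the diagonal matrix below, [1, 0] is an eigenvector with
# eigenvalue 2, while [1, 1] is not.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
print(is_eigenpair(A, [1, 0], 2.0))   # True
print(is_eigenpair(A, [1, 1], 2.0))   # False
```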
For the remainder of this chapter we will see examples of eigenvalues and eigenvectors of linear maps, including rotations, scalings, and derivative maps. We first see an example of scaling a vector in \(\mathbb{R}^2\text{.}\)

Example 4.2.2.

Find the eigenvalues and eigenvectors of the scale map \(S\) from Example 3.4.6.
Solution.
Recall that the scale map \(S: \mathbb{R}^2 \rightarrow \mathbb{R}^2\) is given by
\begin{equation*} S(\vec{x}) = k \vec{x}. \end{equation*}
To find the eigenvalues and eigenvectors of \(S\text{,}\) we seek an \(\vec{x}\) and a \(\lambda\) such that
\begin{equation*} S(\vec{x})=\lambda \vec{x} \end{equation*}
but since \(S(\vec{x})=k \vec{x}\text{,}\) then \(\lambda =k\) and any nonzero \(\vec{x}\) is an eigenvector.
Alternatively, we can write down the matrix \(A_S\) associated with the map. This was done in Example 3.4.12 and is
\begin{equation*} A_S = \begin{bmatrix} k \amp 0 \\ 0 \amp k \end{bmatrix}. \end{equation*}
The eigenvalues and eigenvectors of this matrix were found in Example 4.1.6 for a particular \(k\text{,}\) but generalizing that, one can see that \(\lambda=k\) will be the only eigenvalue of \(A_S\text{,}\) with eigenvectors \([1\;\;0]^{\intercal}\) and \([0\;\;1]^{\intercal}\text{.}\)
In this case, since there are two linearly independent eigenvectors associated with \(\lambda=k\text{,}\) any nonzero linear combination of the two is also an eigenvector, and since \([1\;\;0]^{\intercal}\) and \([0\;\;1]^{\intercal}\) span \(\mathbb{R}^2\text{,}\) every nonzero vector in \(\mathbb{R}^2\) is an eigenvector.
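This computation can also be checked with a computer algebra system. Below is a minimal sketch using sympy, with the scale factor left as the symbol \(k\text{;}\) the variable names are chosen for illustration.

```python
import sympy as sp

# Symbolic check of the scale map: A_S = k*I has the single eigenvalue k.
k = sp.symbols('k')
A_S = sp.Matrix([[k, 0], [0, k]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis vectors).
print(A_S.eigenvects())
# [(k, 2, [Matrix([[1], [0]]), Matrix([[0], [1]])])]
```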
The next example shows the eigenvalues of the linear map associated with a derivative.

Example 4.2.3.

The set
\begin{equation*} V = \{ ae^{x} + b e^{-x} \; | \; a,b \in \mathbb{R} \} \end{equation*}
is a subspace of all functions on \(\mathbb{R}\text{.}\) A basis for this subspace is \((e^x,e^{-x})\text{.}\) In addition, the derivative operator \(D:V \rightarrow V\text{,}\) which maps each function \(f\) to its derivative \(f'\text{,}\) is a linear transformation. What are the eigenvalues and eigenvectors of \(D\text{?}\)
Solution.
There are two ways of looking at this. Since \(e^{x} \mapsto e^x\text{,}\) this means that \(e^x\) is an eigenvector with corresponding eigenvalue 1. Similarly, since \(e^{-x} \mapsto -e^{-x}\text{,}\) this also means that \(e^{-x}\) is an eigenvector with eigenvalue \(-1\text{.}\)
Alternatively, this can be done by first finding the matrix representation of the derivative operator in the basis \((e^x,e^{-x})\text{,}\) which is
\begin{equation*} A_D = \begin{bmatrix} 1 \amp 0 \\ 0 \amp -1 \end{bmatrix} \end{equation*}
Recall that in the case of diagonal matrices, the eigenvalues are the diagonal entries, so \(\lambda_1=1\) and \(\lambda_2=-1\text{.}\) One can also find that the corresponding eigenvectors are
\begin{align*} \vec{v}_1 \amp = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \amp \vec{v}_2 \amp = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{align*}
These two vectors translate back to the functions \(e^{x}\) and \(e^{-x}\text{,}\) the same eigenvectors we found above. These are exactly the functions in \(V\) that differentiation leaves unchanged up to a scalar constant.
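This example can also be checked by differentiating directly with a computer algebra system. The following is a small sketch using sympy; it simply confirms that each basis function is carried to the stated multiple of itself.

```python
import sympy as sp

# Direct check: differentiating each basis function of V returns a scalar
# multiple of itself, and that scalar is the eigenvalue.
x = sp.symbols('x')
for f, expected in [(sp.exp(x), 1), (sp.exp(-x), -1)]:
    ratio = sp.simplify(sp.diff(f, x) / f)   # D(f)/f is the eigenvalue
    assert ratio == expected
    print(f, '->', ratio)   # exp(x) -> 1, exp(-x) -> -1
```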

Example 4.2.4.

Consider the differential map \(D: \mathcal{P}_3 \rightarrow \mathcal{P}_3\text{,}\) which maps cubic polynomials to cubic polynomials. In the standard basis \((1, x, x^2, x^3)\text{,}\) its matrix representation is
\begin{equation*} A = \begin{bmatrix} 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 3 \\ 0 \amp 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
Find the eigenvalues and eigenvectors of this matrix and interpret the result.
Solution.
First, we find
\begin{equation*} |A-\lambda I| = \begin{vmatrix} -\lambda \amp 1 \amp 0 \amp 0 \\ 0 \amp -\lambda \amp 2 \amp 0 \\ 0 \amp 0 \amp -\lambda \amp 3 \\ 0 \amp 0 \amp 0 \amp -\lambda \end{vmatrix} = (-\lambda)^4 \end{equation*}
Setting this equal to zero shows that \(\lambda=0\) is the only eigenvalue. To find the eigenvectors, we find the null space of the original matrix; after scaling the second and third rows, its reduced row echelon form is
\begin{equation*} \begin{bmatrix} 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
and the null space is
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} s \; | \; s \in \mathbb{R} \right\} \end{equation*}
This vector is the representation of the constant polynomial \(p(x)=c\text{.}\) Thus, the only polynomials that differentiation carries to a scalar multiple of themselves are the constants, and the corresponding eigenvalue is 0.
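As a check, the characteristic polynomial and null space above can be reproduced with sympy; the following sketch does both.

```python
import sympy as sp

# Check of the derivative matrix on P_3: its characteristic polynomial is
# lambda**4, and its null space is spanned by [1, 0, 0, 0]^T, the
# representation of a constant polynomial.
A = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3],
               [0, 0, 0, 0]])

print(A.charpoly().as_expr())   # lambda**4
print(A.nullspace())            # [Matrix([[1], [0], [0], [0]])]
```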
This last example involves the vector space of 2 by 2 matrices and a map that rotates the entries of a matrix.

Example 4.2.5.

Find the eigenvalues and eigenvectors of the linear map that rotates a 2 by 2 matrix 90\(^{\circ}\) clockwise. That is, \(R: \mathcal{M}_{2 \times 2} \rightarrow \mathcal{M}_{2\times 2}\) such that
\begin{equation*} \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \mapsto \begin{bmatrix} c \amp a \\ d\amp b \end{bmatrix} \end{equation*}
Solution.
Consider the vector representation of a 2 by 2 matrix in the basis
\begin{equation*} B = \biggl( \begin{bmatrix} 1 \amp0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp0 \\ 0 \amp 1 \end{bmatrix} \biggr) \end{equation*}
so that
\begin{equation*} \text{Rep}_B \biggl( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \biggr) = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}. \end{equation*}
Then one can show that the map \(R\) is represented by the matrix
\begin{equation*} A_R = \begin{bmatrix} 0 \amp 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
We now find the eigenvalues and eigenvectors of this matrix. The eigenvalue-eigenvector pairs are
\begin{align*} \lambda_1 \amp = 1 \amp \vec{v}_1 \amp = \begin{bmatrix} 1 \\ 1 \\ 1 \\1 \end{bmatrix} \amp \lambda_2 \amp = -1 \amp \vec{v}_2 \amp = \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}, \\ \lambda_3 \amp = i \amp \vec{v}_3 \amp = \begin{bmatrix} 1 \\ -i \\ i \\ -1 \end{bmatrix} \amp \lambda_4 \amp = -i \amp \vec{v}_4 \amp = \begin{bmatrix} 1 \\ i \\ -i \\ -1 \end{bmatrix} \end{align*}
To translate this back to the map that rotates the matrix, we translate each of the eigenvectors to the matrix that it represents. For example, \(\vec{v}_1\) is the matrix
\begin{equation*} \begin{bmatrix} 1 \amp 1 \\ 1 \amp 1 \end{bmatrix} \end{equation*}
and if that matrix is rotated, you get the same matrix back, so the eigenvalue is 1. The second eigenvector can be written as the matrix
\begin{equation*} \begin{bmatrix} 1 \amp -1 \\ -1 \amp 1 \end{bmatrix} \end{equation*}
and if you rotate this matrix, you get the matrix
\begin{equation*} \begin{bmatrix} -1\amp 1 \\ 1 \amp -1 \end{bmatrix} \end{equation*}
which is the above matrix multiplied by the eigenvalue \(\lambda_2=-1\text{.}\) In other words:
\begin{equation*} R\left( \begin{bmatrix} 1 \amp -1 \\ -1 \amp 1 \end{bmatrix} \right) = - \begin{bmatrix} 1 \amp -1 \\ -1 \amp 1 \end{bmatrix} \end{equation*}
The other two work in a similar manner; however, complex numbers are needed.
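Finally, the eigenvalues of \(A_R\) can be confirmed numerically. The following sketch uses numpy, which returns the four eigenvalues in no guaranteed order.

```python
import numpy as np

# Numerical check of the rotation map: the matrix A_R has eigenvalues
# 1, -1, i, and -i (returned in no particular order).
A_R = np.array([[0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 1, 0, 0]], dtype=float)

eigenvalues, eigenvectors = np.linalg.eig(A_R)
print(np.round(eigenvalues, 10))
```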