
Applied Mathematics

Section 4.1 Eigenvalues and Eigenvectors

Definition 4.1.1.

For a square matrix \(A\text{,}\) an eigenvalue \(\lambda\) and an associated eigenvector \(\vec{v}\) satisfy
\begin{equation*} A \vec{v} = \lambda \vec{v} \end{equation*}
as long as \(\vec{v}\) is not the zero vector.
Recall that the matrix operation \(A \vec{v}\) is a linear transformation on \(\vec{v}\text{,}\) that is, it takes a vector \(\vec{v}\) to another vector. Thus an eigenvector of a matrix is a vector such that \(A\vec{v}\) results in a vector in the same direction as \(\vec{v}\text{.}\) The eigenvalue is the scalar factor by which \(\vec{v}\) is scaled.

Example 4.1.2.

Show that \(\vec{v}=[1\;\;1]^{\intercal}\) is an eigenvector of
\begin{equation*} A =\begin{bmatrix} 3 \amp 4 \\ 2 \amp 5 \end{bmatrix} \end{equation*}
Solution.
We need to show that this satisfies \(A \vec{v} = \lambda \vec{v}\) for some scalar \(\lambda\text{.}\) Since
\begin{equation*} A \vec{v} = \begin{bmatrix} 3 \amp 4 \\ 2 \amp 5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 7 \\ 7 \end{bmatrix} = 7 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{equation*}
and since this result is 7 times the vector \(\vec{v}\text{,}\) this shows that \(\vec{v}\) is an eigenvector. The scalar 7 is the corresponding eigenvalue.
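If a numerical library such as NumPy is available, this computation can be checked directly. The following is a minimal sketch (with the matrix and vector from the example simply typed in) that verifies \(A\vec{v}\) equals \(7\vec{v}\text{:}\)

import numpy as np

# Matrix and candidate eigenvector from Example 4.1.2
A = np.array([[3, 4],
              [2, 5]])
v = np.array([1, 1])

print(A @ v)                      # [7 7]
print(np.allclose(A @ v, 7 * v))  # True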

Subsection 4.1.1 Finding Eigenvalues and Eigenvectors

We will first find the eigenvalues of a given matrix. Start with the eigenvector equation:
\begin{align*} A \vec{v} \amp = \lambda \vec{v} \amp \amp \text{subtract $\lambda \vec{v}$}\\ A \vec{v} - \lambda \vec{v} \amp = \vec{0}, \amp \amp \text{introduce an identity matrix}\\ A \vec{v} - \lambda I \vec{v} \amp = \vec{0}, \amp \amp \text{factor out the $\vec{v}$}\\ (A - \lambda I) \vec{v} \amp = \vec{0} \end{align*}
and we have transformed this into a homogeneous matrix equation. As stated in the definition, an eigenvector cannot be the zero vector, and by Theorem 2.9.3, a nonzero solution \(\vec{v}\) exists only when
\begin{equation} \det(A-\lambda I) = 0\tag{4.1.1} \end{equation}
This is called the characteristic equation; it is a polynomial equation of degree \(n\text{,}\) where \(n\) is the size of the matrix.
To find an eigenvector associated with the eigenvalue, solve
\begin{equation*} (A - \lambda I) \vec{v} = \vec{0} \end{equation*}
for \(\vec{v}\text{,}\) that is, find a basis of the null space of the matrix \(A - \lambda I\text{.}\)
Note: as we will see in all of the examples in this section, the null space of \(A-\lambda I\) has dimension at least 1, and the reduced row echelon form of \(A-\lambda I\) has at least one row of zeros, indicating that the rank of \(A-\lambda I\) is less than \(n\text{.}\) This follows from Theorem 2.9.3 and is helpful for detecting errors when finding eigenvalues and eigenvectors.
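As an illustration of this whole procedure, a computer algebra system can form the characteristic polynomial and the null space of \(A-\lambda I\) symbolically. Below is a minimal sketch using SymPy (assuming it is installed), applied to the matrix from Example 4.1.2; the same steps are carried out by hand in the next example.

import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[3, 4],
               [2, 5]])

# Characteristic polynomial det(A - lambda*I) and its roots (the eigenvalues)
p = (A - lam * sp.eye(2)).det()
print(sp.factor(p))       # factors as (lambda - 1)*(lambda - 7)
print(sp.solve(p, lam))   # [1, 7]

# A basis of the null space of A - lambda*I gives an eigenvector for each eigenvalue
for ev in sp.solve(p, lam):
    print(ev, (A - ev * sp.eye(2)).nullspace())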

Example 4.1.3.

Find all eigenvalues and eigenvectors of
\begin{equation*} \begin{bmatrix} 3 \amp 4 \\ 2 \amp 5 \end{bmatrix} \end{equation*}
Solution.
First, solve \(|A-\lambda I| = 0\text{,}\)
\begin{align*} |A - \lambda I| \amp = \begin{vmatrix} 3-\lambda \amp 4 \\ 2 \amp 5 - \lambda \end{vmatrix} = (3-\lambda)(5-\lambda) - 8 \\ \amp = \lambda^2 -8\lambda +7 = (\lambda-1)(\lambda-7) =0 \end{align*}
so \(\lambda = 1,7\text{.}\)
Next, we need to find an eigenvector for each eigenvalue. When \(\lambda=1\text{,}\)
\begin{equation*} (A- I)\vec{v} = \begin{bmatrix} 2 \amp 4 \\ 2 \amp 4 \end{bmatrix} \vec{v} = \vec{0} \end{equation*}
To solve this matrix equation, we’ll use Gaussian elimination on the augmented matrix:
\begin{align*} \amp \qquad \left[\begin{array}{rr|r} 2 \amp 4 \amp 0 \\ 2 \amp 4 \amp 0 \end{array}\right] \\ -R_1 + R_2 \rightarrow R_2 \amp \qquad \left[\begin{array}{rr|r} 2 \amp 4 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right] \\ \frac{1}{2} R_1 \rightarrow R_1 \amp \qquad \left[\begin{array}{rr|r} 1 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right] \end{align*}
The first row of the matrix corresponds to \(x_1+2x_2=0\text{,}\) so the null space of \(A-I\) is
\begin{equation*} \left\{ \begin{bmatrix} -2 \\ 1 \end{bmatrix} s \; | \; s \in \mathbb{R} \right\} \end{equation*}
The eigenvector associated with \(\lambda=1\) is the basis vector of this space, or
\begin{equation*} \vec{v} = \begin{bmatrix} -2 \\ 1 \end{bmatrix} \end{equation*}
And to find the eigenvector associated with \(\lambda = 7\text{,}\) solve
\begin{equation*} (A - 7 I) \vec{v} = \begin{bmatrix} -4 \amp 4 \\ 2 \amp -2 \end{bmatrix} \vec{v} = \vec{0} \end{equation*}
and use Gaussian elimination to reduce
\begin{align*} \amp \qquad \left[\begin{array}{rr|r} -4 \amp 4 \amp 0 \\ 2 \amp -2 \amp 0 \end{array}\right] \\ \begin{array}{r} -\frac{1}{4} R_1 \rightarrow R_1 \\ -2 R_1 + R_2 \rightarrow R_2 \end{array} \amp \qquad \left[\begin{array}{rr|r} 1 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right] \end{align*}
The null space of \(A-7I\) is
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} s \; | \; s \in \mathbb{R} \right\} \end{equation*}
so the eigenvector associated with \(\lambda=7\) is the basis vector of this space, or
\begin{equation*} \vec{v} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{equation*}
Overall, there are two eigenvalues, each with an associated eigenvector:
\begin{align*} \lambda_1 \amp = 1, \quad \vec{v}_1 = \begin{bmatrix} -2 \\1 \end{bmatrix} \amp \lambda_2 \amp = 7, \quad \vec{v}_2 = \begin{bmatrix} 1 \\1 \end{bmatrix} \end{align*}
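These results can also be checked numerically. A minimal NumPy sketch (assuming NumPy is available) follows; note that numpy.linalg.eig scales each eigenvector to unit length, so its columns are scalar multiples of the vectors found above.

import numpy as np

A = np.array([[3, 4],
              [2, 5]])

# eig returns the eigenvalues and a matrix whose columns are unit-length eigenvectors
evals, evecs = np.linalg.eig(A)
print(evals)   # approximately [1. 7.] (order may vary)
print(evecs)   # columns proportional to [-2, 1] and [1, 1]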
This example showed that this \(2 \times 2\) matrix has two real eigenvalues and two corresponding eigenvectors. Also, although it may seem that these eigenvectors are unique, we’ll show that \([-3\;\;-3]^{\intercal}\) is also an eigenvector of the matrix in Example 4.1.3:
\begin{equation*} \begin{bmatrix} 3 \amp 4 \\ 2 \amp 5 \end{bmatrix}\begin{bmatrix} -3 \\ -3 \end{bmatrix} = \begin{bmatrix} -21 \\ -21 \end{bmatrix} = 7 \begin{bmatrix} -3 \\ -3 \end{bmatrix} \end{equation*}
which shows directly that \([-3\;\;-3]^{\intercal}\) is an eigenvector with corresponding eigenvalue 7. Does this mean that there are other eigenvectors? Yes. The following lemma shows this.

Lemma 4.1.4.

If \(\vec{v}\) is an eigenvector of a matrix \(A\) with corresponding eigenvalue \(\lambda\text{,}\) then for any nonzero scalar \(k\text{,}\) the vector \(k\vec{v}\) is also an eigenvector of \(A\) with the same eigenvalue \(\lambda\text{.}\)

Proof.

Since \(\vec{v}\) is an eigenvector of \(A\) with corresponding eigenvalue \(\lambda\text{,}\) then \(A \vec{v} = \lambda \vec{v}\text{.}\) Multiplying this by \(k\) results in
\begin{align*} k(A \vec{v}) \amp = k (\lambda \vec{v}) \\ A (k \vec{v}) \amp = \lambda (k \vec{v}) \end{align*}
which shows the result. Also note that \(k\) cannot be zero because \(0\vec{v}\) is the zero vector, which by definition is not an eigenvector.
The next example shows another possible outcome for the eigenvalues and eigenvectors of a matrix.

Example 4.1.5.

Find the eigenvalues and eigenvectors of
\begin{equation*} \begin{bmatrix} 2 \amp -1 \\ 1 \amp 4 \end{bmatrix} \end{equation*}
Solution.
The eigenvalues are found by solving the characteristic equation
\begin{align*} |A-\lambda I| \amp = \begin{vmatrix} 2-\lambda \amp -1 \\ 1 \amp 4-\lambda \end{vmatrix} = (2-\lambda)(4-\lambda)+1 \\ \amp = \lambda^2-6\lambda + 9 \end{align*}
Since \(\lambda^2-6\lambda+9=(\lambda-3)^2\text{,}\) solving \(|A-\lambda I|=0\) gives the single repeated root \(\lambda = 3\text{.}\) To find the associated eigenvector, we solve for the null space of \(A-3I\text{:}\)
\begin{align*} A-3I = \amp \begin{bmatrix} -1 \amp -1 \\ 1 \amp 1 \end{bmatrix} \\ R_1 + R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} -1 \amp -1 \\ 0 \amp 0 \end{bmatrix} \end{align*}
which corresponds to the single equation \(-x_1 - x_2 = 0\text{,}\) or \(x_1 =-x_2\text{,}\) which shows that the null space is
\begin{equation*} \left\{ \begin{bmatrix} -1 \\ 1 \end{bmatrix} x_2 \; | \; x_2 \in \mathbb{R} \right\} \end{equation*}
and the eigenvector is \(\vec{v} = [-1\;\; 1]^{\intercal}\text{.}\)
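SymPy’s eigenvects method reports this situation directly: it returns the single eigenvalue 3 with algebraic multiplicity 2 but only one basis eigenvector. A minimal sketch, assuming SymPy is available:

import sympy as sp

A = sp.Matrix([[2, -1],
               [1,  4]])

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace)
print(A.eigenvects())   # [(3, 2, [Matrix([[-1], [1]])])]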
The next example also has only one eigenvalue, but it has two linearly independent eigenvectors.

Example 4.1.6.

Find the eigenvalues and eigenvectors of
\begin{equation*} \begin{bmatrix} 2 \amp 0 \\ 0 \amp 2 \end{bmatrix} \end{equation*}
Solution.
This has the characteristic equation \((\lambda-2)^2 = 0\) or the single root \(\lambda = 2\text{.}\) To find the eigenvectors we find the null space of \(A-2I\) or
\begin{equation*} \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \end{equation*}
and any vector in \(\mathbb{R}^2\) is in the null space. We can write down the space using the standard basis vectors
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} x_1 + \begin{bmatrix} 0 \\ 1 \end{bmatrix} x_2 \;|\; x_1, x_2 \in \mathbb{R} \right\} \end{equation*}
so \(\vec{v}_1 = [1\;\;0]^{\intercal}\) and \(\vec{v}_2=[0\; \;1]^{\intercal}\) are both eigenvectors associated with \(\lambda=2\text{.}\) In fact, since any vector in \(\mathbb{R}^2\) is in the span of these two eigenvectors, any nonzero vector in \(\mathbb{R}^2\) is an eigenvector.
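For comparison, the same SymPy call on the matrix of this example (a sketch, again assuming SymPy is available) shows one eigenvalue of multiplicity 2 with a two-dimensional eigenspace:

import sympy as sp

A = sp.Matrix([[2, 0],
               [0, 2]])

print(A.eigenvects())   # [(2, 2, [Matrix([[1], [0]]), Matrix([[0], [1]])])]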
There are other possibilities of eigenvalues and eigenvectors of \(2 \times 2\) matrices. The following has complex eigenvalues.

Example 4.1.7.

Find the eigenvalues and eigenvectors of
\begin{equation*} \begin{bmatrix} 0 \amp 2 \\ -2 \amp 0 \end{bmatrix} \end{equation*}
Solution.
To find the eigenvalues, we find
\begin{equation*} \begin{vmatrix} -\lambda \amp 2 \\ -2 \amp -\lambda \end{vmatrix} = 0 \end{equation*}
or \(\lambda^2+ 4 =0\text{,}\) which has the solutions \(\lambda = \pm 2i\text{.}\)
To find the eigenvectors, we find the null space associated with each of the two eigenvalues.
  • \(\lambda = 2i\)
    Find the null space of \(A-2iI\text{:}\)
    \begin{align*} \amp \qquad \begin{bmatrix} -2i \amp 2 \\ -2 \amp -2i \end{bmatrix} \\ \frac{1}{-2i} R_1 \rightarrow R_1 \amp \qquad \begin{bmatrix} 1 \amp i \\ -2 \amp -2i \end{bmatrix} \end{align*}
    where \(\frac{-2}{2i}=\frac{-1}{i} =\frac{-i}{i^2}=\frac{-i}{-1}=i\) is used
    \begin{align*} 2R_1 + R_2 \rightarrow R_2, \amp \qquad \begin{bmatrix} 1 \amp i \\ 0 \amp 0 \end{bmatrix} \end{align*}
    and the top equation says \(x_1 = - ix_2\text{,}\) so the null space is
    \begin{equation*} \left\{ \begin{bmatrix} -i \\ 1 \end{bmatrix}x_2 \; | \; x_2 \in \mathbb{C} \right\} \end{equation*}
    so the eigenvector is \([-i \; \; 1]^{\intercal}\text{.}\)
  • \(\lambda = -2i\)
    Find the null space of \(A+2iI\text{:}\)
    \begin{align*} \amp \qquad \begin{bmatrix} 2i \amp 2 \\ -2 \amp 2i \end{bmatrix} \\ \frac{1}{2i} R_1 \rightarrow R_1 \amp \qquad \begin{bmatrix} 1 \amp -i \\ -2 \amp 2i \end{bmatrix} \\ 2R_1 + R_2 \rightarrow R_2, \amp \qquad \begin{bmatrix} 1 \amp -i \\ 0 \amp 0 \end{bmatrix} \end{align*}
    and the top equation says \(x_1 = ix_2\text{,}\) so the null space is
    \begin{equation*} \left\{ \begin{bmatrix} i \\ 1 \end{bmatrix}x_2 \; | \; x_2 \in \mathbb{C} \right\} \end{equation*}
    so the eigenvector is \([i \; \; 1]^{\intercal}\text{.}\)
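Numerically, numpy.linalg.eig handles complex eigenvalues of a real matrix as well. A minimal sketch, assuming NumPy is available:

import numpy as np

A = np.array([[ 0, 2],
              [-2, 0]])

evals, evecs = np.linalg.eig(A)
print(evals)   # approximately [0.+2.j  0.-2.j]
print(evecs)   # columns proportional to [-i, 1] and [i, 1], scaled to unit length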
Notice in the last example that there were complex conjugate pairs of eigenvalues and associated eigenvectors. This always occurs for a real matrix \(A\text{.}\) The following theorem summarizes this.

Theorem 4.1.8.

If \(A\) is a real matrix with eigenvalue \(\lambda\) and associated eigenvector \(\vec{v}\text{,}\) then \(\overline{\lambda}\) and \(\overline{\vec{v}}\) are also an eigenvalue-eigenvector pair of \(A\text{.}\)

Proof.

If \(\lambda\text{,}\) \(\vec{v}\) is an eigenvalue-eigenvector pair of \(A\text{,}\) then
\begin{equation*} A\vec{v} = \lambda \vec{v} \end{equation*}
Taking the complex conjugate of this equation gives
\begin{align*} \overline{A\vec{v}} \amp = \overline{\lambda \vec{v}} \\ \overline{A} \, \overline{\vec{v}} \amp = \overline{\lambda} \, \overline{\vec{v}} \\ A \overline{\vec{v}} \amp = \overline{\lambda} \, \overline{\vec{v}} \end{align*}
where \(\overline{A}=A\) because \(A\) is a real matrix. This shows that \(\overline{\lambda}\text{,}\) \(\overline{\vec{v}}\) is an eigenvalue-eigenvector pair of \(A\text{.}\)
The result of this theorem will save work if you have a real matrix and find a complex eigenvalue. Example 4.1.7 shows that one eigenvalue-eigenvector pair is \(\lambda_1=2i\text{,}\) \(\vec{v}_1=[-i \; \; 1]^{\intercal}\text{.}\) The above theorem then shows that \(\lambda_2=\overline{\lambda_1}= -2i\) and \(\vec{v}_2=\overline{\vec{v}_1} = [i \; \; 1]^{\intercal}\) is another eigenvalue-eigenvector pair, with no further computation. The examples in the next subsection have real eigenvalues and eigenvectors, but exhibit different features than Example 4.1.3.

Note 4.1.9. Eigenvalue/Eigenvectors of a \(2\times 2\) real matrix.

The following is a list of the possible eigenvalue-eigenvector combinations for a real \(2 \times 2\) matrix.
  • 2 distinct real eigenvalues, each with a corresponding eigenvector (Example 4.1.3).
  • 1 repeated real eigenvalue with a single corresponding eigenvector (Example 4.1.5).
  • 1 repeated real eigenvalue with 2 linearly independent eigenvectors (Example 4.1.6).
  • 2 complex eigenvalues (complex conjugates) with corresponding eigenvectors that are also complex conjugates (Example 4.1.7).

Subsection 4.1.2 Eigenvalues and Eigenvectors of \(3 \times 3\) matrices

Finding the eigenvalues and eigenvectors for matrices larger than \(2 \times 2\) goes through the same steps. It is just a bit more involved, and we will show two examples.

Example 4.1.10.

Find the eigenvalues and eigenvectors of the 3 by 3 matrix
\begin{equation*} \begin{bmatrix} 1 \amp 0 \amp 0 \\ 3 \amp -2 \amp 6 \\ 2 \amp 1 \amp 3 \end{bmatrix} \end{equation*}
Solution.
To find the eigenvalues, we solve the characteristic equation \(|A-\lambda I|=0\text{.}\) Expanding the determinant across the first row,
\begin{align*} |A-\lambda I| \amp= \begin{vmatrix} 1-\lambda \amp 0 \amp 0 \\ 3 \amp -2-\lambda \amp 6 \\ 2 \amp 1 \amp 3-\lambda \end{vmatrix} = (1-\lambda) \begin{vmatrix} -2 - \lambda \amp 6 \\ 1 \amp 3-\lambda \end{vmatrix} \\ \amp=(1-\lambda)( (-2-\lambda)(3-\lambda) -6) =0 \end{align*}
Since \((-2-\lambda)(3-\lambda)-6 = \lambda^2-\lambda-12=(\lambda-4)(\lambda+3)\text{,}\) the roots are \(\lambda=1,4,-3\text{.}\) To find the corresponding eigenvectors, we solve for the null space of \(A-\lambda I\) for each \(\lambda\text{:}\)
  • \(\lambda=1\)
    \begin{align*} A-I = \amp \begin{bmatrix} 0 \amp 0 \amp 0 \\ 3 \amp -3 \amp 6 \\ 2 \amp 1 \amp 2 \end{bmatrix}\\ \begin{array}{r} R_2 \leftrightarrow R_1\\ R_3 \leftrightarrow R_2 \end{array} \qquad \amp \begin{bmatrix} 3 \amp -3 \amp 6 \\ 2 \amp 1 \amp 2\\ 0 \amp 0 \amp 0 \\ \end{bmatrix} \\ \frac{1}{3} R_1 \rightarrow R_1 \qquad \amp \begin{bmatrix} 1 \amp -1 \amp 2 \\ 2 \amp 1 \amp 2\\ 0 \amp 0 \amp 0 \\ \end{bmatrix} \\ -2 R_1 + R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} 1 \amp -1 \amp 2 \\ 0 \amp 3 \amp -2\\ 0 \amp 0 \amp 0 \\ \end{bmatrix}\\ R_2 + 3R_1 \rightarrow R_1 \qquad \amp \begin{bmatrix} 3 \amp 0 \amp 4 \\ 0 \amp 3 \amp -2\\ 0 \amp 0 \amp 0 \\ \end{bmatrix} \end{align*}
    and, other than a factor of 3 in the first two rows, this is in reduced row echelon form. The top two equations are
    \begin{align*} x_1 \amp = -\frac{4}{3} x_3 \\ x_2 \amp = \frac{2}{3} x_3 \end{align*}
    and the null space is
    \begin{equation*} \left\{\begin{bmatrix} -4/3 \\ 2/3 \\ 1 \end{bmatrix} x_3 \; | \; x_3 \in \mathbb{R} \right\} \end{equation*}
    We could take \([-4/3\;\;2/3\;\; 1]^{\intercal}\) as an eigenvector; however, from Lemma 4.1.4, a scalar multiple of an eigenvector is also an eigenvector, so multiplying by 3 results in \(\vec{v}_1=[-4\;\;2\;\;3]^{\intercal}\text{.}\)
  • \(\lambda=4\)
    \begin{align*} A-4I = \amp \begin{bmatrix} -3 \amp 0 \amp 0 \\ 3 \amp -6 \amp 6 \\ 2 \amp 1 \amp -1 \end{bmatrix} \\ -\frac{1}{3} R_1 \rightarrow R_1 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 3 \amp -6 \amp 6 \\ 2 \amp 1 \amp -1 \end{bmatrix} \\ \begin{array}{r} -3R_1 + R_2 \rightarrow R_2 \\ -2R_1 + R_3 \rightarrow R_3 \end{array} \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp -6 \amp 6 \\ 0 \amp 1 \amp -1 \end{bmatrix} \\ -\frac{1}{6} R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp -1 \\ 0 \amp 1 \amp -1 \end{bmatrix} \\ -R_2 + R_3 \rightarrow R_3 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{align*}
    and the top two equations are \(x_1=0\) and \(x_2=x_3\) so the null space can be written
    \begin{equation*} \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} x_3 \; |\; x_3 \in \mathbb{R} \right\} \end{equation*}
    so the eigenvector is \(\vec{v}_2=[0\;\;1\;\;1]^{\intercal}\text{.}\)
  • \(\lambda=-3\)
    \begin{align*} A+3I = \amp\begin{bmatrix} 4 \amp 0 \amp 0 \\ 3 \amp 1 \amp 6 \\ 2 \amp 1 \amp 6 \end{bmatrix}\\ \frac{1}{4} R_1 \rightarrow R_1 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 3 \amp 1 \amp 6 \\ 2 \amp 1 \amp 6 \end{bmatrix}\\ \begin{array}{r} -3R_1 + R_2 \rightarrow R_2 \\ -2R_1 + R_3 \rightarrow R_3 \end{array} \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 6 \\ 0 \amp 1 \amp 6 \end{bmatrix}\\ -R_2 + R_3 \rightarrow R_3 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 6 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{align*}
    and the top two equations are \(x_1=0\) and \(x_2=-6x_3\text{,}\) so the null space is
    \begin{equation*} \left\{ \begin{bmatrix} 0 \\ -6 \\ 1 \end{bmatrix} x_3 \; | \; x_3 \in \mathbb{R} \right\} \end{equation*}
    Therefore the eigenvector is \(\vec{v}_3=[0\;\;-6\;\;1]^{\intercal}\text{.}\)
In summary, for this matrix, there are three real eigenvalues and each has a corresponding eigenvector. These are
\begin{align*} \lambda_1 \amp = 1 \amp \vec{v}_1 \amp = \begin{bmatrix} -4 \\ 2 \\ 3 \end{bmatrix} \\ \lambda_2 \amp =4 \amp \vec{v}_2 \amp = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \\ \lambda_3 \amp =-3 \amp \vec{v}_3 \amp = \begin{bmatrix} 0 \\ -6 \\ 1 \end{bmatrix} \end{align*}
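These results can be confirmed with SymPy, whose eigenvects method works the same way for a \(3 \times 3\) matrix. A minimal sketch, assuming SymPy is available, follows; it may scale the basis vectors differently, for example returning \([-4/3\;\;2/3\;\;1]^{\intercal}\) rather than \([-4\;\;2\;\;3]^{\intercal}\text{.}\)

import sympy as sp

A = sp.Matrix([[1,  0, 0],
               [3, -2, 6],
               [2,  1, 3]])

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace)
for val, mult, basis in A.eigenvects():
    print(val, mult, [list(b) for b in basis])
# Output (order may differ):
# -3 1 [[0, -6, 1]]
#  1 1 [[-4/3, 2/3, 1]]
#  4 1 [[0, 1, 1]]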

Example 4.1.11.

Find the eigenvalues and eigenvectors of
\begin{equation*} A = \begin{bmatrix} 1 \amp 0 \amp 2 \\ 2 \amp 0 \amp 4 \\ -1 \amp 0 \amp -2 \end{bmatrix} \end{equation*}
Solution.
First, we solve for the eigenvalues by solving \(|A-\lambda I|=0\text{.}\) Expanding the determinant down the second column,
\begin{align*} |A-\lambda I| \amp = \begin{vmatrix} 1-\lambda \amp 0 \amp 2 \\ 2 \amp -\lambda \amp 4 \\ -1 \amp 0 \amp -2- \lambda \end{vmatrix} \\ \amp = -\lambda \begin{vmatrix} 1-\lambda \amp 2 \\ -1 \amp -2-\lambda \end{vmatrix} = -\lambda \bigl( (1-\lambda)(-2-\lambda)+2 \bigr) \\ \amp = -\lambda (\lambda^2 + \lambda -2 + 2) = -\lambda^3 - \lambda^2 \end{align*}
and since \(-\lambda^3-\lambda^2 = -\lambda^2(\lambda+1)\text{,}\) this is 0 when \(\lambda=0\) (a repeated root) and when \(\lambda = -1\text{.}\) Next, find the eigenvectors. The eigenvectors with \(\lambda=0\) are found by finding the null space of \(A-0I=A\text{:}\)
\begin{align*} \amp \qquad \begin{bmatrix} 1 \amp 0 \amp 2 \\ 2 \amp 0 \amp 4 \\ -1 \amp 0 \amp -2 \end{bmatrix} \\ \begin{array}{r} -2R_1 + R_2 \rightarrow R_2 \\ R_1+R_3 \rightarrow R_3 \end{array} \amp \qquad \begin{bmatrix} 1 \amp0 \amp 2 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{align*}
and since the only equation is \(x_1 +2x_3=0\text{,}\) both \(x_2\) and \(x_3\) are free variables and the solution space (therefore the null space) can be written:
\begin{equation*} \left\{ \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} x_2 + \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix} x_3 \; | \; x_2, x_3 \in \mathbb{R} \right\} \end{equation*}
This shows that the vectors
\begin{align*} \vec{v}_1 \amp = \begin{bmatrix} 0 \\ 1\\ 0 \end{bmatrix}\amp \vec{v}_2 \amp = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix} \end{align*}
are both eigenvectors associated with \(\lambda=0\text{.}\) The eigenvector associated with \(\lambda = -1\) is found by seeking the null space of \(A+I\text{:}\)
\begin{align*} \amp \qquad \begin{bmatrix} 2 \amp 0 \amp 2 \\ 2 \amp 1 \amp 4 \\ -1 \amp 0 \amp -1 \end{bmatrix} \\ \frac{1}{2}R_1 \rightarrow R_1 \amp \qquad \begin{bmatrix} 1 \amp 0 \amp 1 \\ 2 \amp 1 \amp 4 \\ -1 \amp 0 \amp -1 \end{bmatrix} \\ \begin{array}{r} -2R_1 + R_2 \rightarrow R_2, \\ R_1 + R_3 \rightarrow R_3, \end{array} \amp \qquad \begin{bmatrix} 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 2 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{align*}
These equations are
\begin{align*} x_1 \amp = -x_3 \\ x_2 \amp = -2x_3 \end{align*}
so the solution set (and the null space) is
\begin{equation*} \left\{ \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix} x_3 \; | \; x_3 \in \mathbb{R} \right\} \end{equation*}
so the eigenvector associated with \(\lambda_3 =-1\) is
\begin{equation*} \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix} \end{equation*}
In summary, for this matrix, there are two real eigenvalues. The first has two corresponding eigenvectors and the second has a single corresponding eigenvector. These are
\begin{align*} \lambda_1 \amp = 0 \amp \vec{v}_1 \amp = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \amp \vec{v}_2 \amp = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}\\ \lambda_3 \amp = -1 \amp \vec{v}_3 \amp = \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix} \end{align*}
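A quick SymPy check (a sketch, assuming SymPy is installed) confirms that \(\lambda=0\) has algebraic multiplicity 2 with a two-dimensional eigenspace, while \(\lambda=-1\) has a one-dimensional eigenspace:

import sympy as sp

A = sp.Matrix([[ 1, 0,  2],
               [ 2, 0,  4],
               [-1, 0, -2]])

for val, mult, basis in A.eigenvects():
    print(val, mult, [list(b) for b in basis])
# Output (order may differ):
# -1 1 [[-1, -2, 1]]
#  0 2 [[0, 1, 0], [-2, 0, 1]]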

Note 4.1.12. Eigenvalue/Eigenvectors of a \(3\times 3\) real matrix.

The following is a list of the possible eigenvalue-eigenvector combinations for a real \(3 \times 3\) matrix.
  • 3 real eigenvalues, 3 linearly independent eigenvectors
  • 2 real eigenvalues, one with one corresponding eigenvector, the other with two corresponding eigenvectors.
  • 1 real eigenvalue, 3 linearly independent eigenvectors.
  • 1 real eigenvalue with a corresponding real eigenvector, and 2 complex eigenvalues (complex conjugates) with 2 linearly independent eigenvectors (also complex conjugates).