Applied Mathematics

Section 2.9 Summary of the basics of linear algebra

We finish this chapter with a summarizing theorem about linear algebra, which shows how the concepts above are related.

Example 2.9.2.

Examine Theorem 2.9.1 using the matrix
\begin{equation*} A = \begin{bmatrix} 1 \amp 0 \amp 3 \\ 0 \amp 2 \amp -1 \\ 0 \amp 3 \amp -1 \end{bmatrix} \end{equation*}
Solution.
In this example, we will show all of the equivalent properties directly on the matrix \(A\) .
  1. First find the determinant. Using the Laplace expansion method and expanding down the first column
    \begin{equation*} |A| = 1 \begin{vmatrix} 2 \amp -1 \\ 3 \amp -1 \end{vmatrix} = 1 (2(-1)-(-1)3) = 1 \end{equation*}
    which is nonzero.
  2. Next, we’ll find the inverse matrix:
    \begin{align*} \qquad \amp \left[\begin{array}{rrr|rrr} 1 \amp 0 \amp 3 \amp 1 \amp 0 \amp 0\\ 0 \amp 2 \amp -1 \amp 0 \amp 1 \amp0 \\ 0 \amp 3 \amp -1 \amp 0 \amp 0 \amp 1 \end{array}\right]\\ -3 R_2 +2 R_3 \rightarrow R_3 \qquad \amp \left[\begin{array}{rrr|rrr} 1 \amp 0 \amp 3 \amp 1 \amp 0 \amp 0\\ 0 \amp 2 \amp -1 \amp 0 \amp 1 \amp0 \\ 0 \amp 0 \amp 1 \amp 0 \amp -3 \amp 2 \end{array}\right]\\ \begin{array}{r} R_3 + R_2 \rightarrow R_2, \\ -3R_3 + R_1 \rightarrow R_1 \end{array} \qquad \amp \left[\begin{array}{rrr|rrr} 1 \amp 0 \amp 0 \amp 1 \amp 9 \amp -6\\ 0 \amp 2 \amp 0 \amp 0 \amp -2 \amp2 \\ 0 \amp 0 \amp 1 \amp 0 \amp -3 \amp 2 \end{array}\right]\\ \frac{1}{2} R_2 \rightarrow R_2 \qquad \amp \left[\begin{array}{rrr|rrr} 1 \amp 0 \amp 0 \amp 1 \amp 9 \amp -6\\ 0 \amp 1 \amp 0 \amp 0 \amp -1 \amp 1 \\ 0 \amp 0 \amp 1 \amp 0 \amp -3 \amp 2 \end{array}\right] \end{align*}
    and this shows that the inverse matrix is
    \begin{equation*} A^{-1} = \begin{bmatrix} 1 \amp 9 \amp -6\\ 0 \amp -1 \amp 1 \\ 0 \amp -3 \amp 2 \end{bmatrix} \end{equation*}
  3. Since the inverse matrix exists, then a unique solution to \(A\vec{x}=\vec{b}\) can be found by \(\vec{x}=A^{-1}\vec{b}\text{.}\)
  4. Again, since \(A^{-1}\) exists, then
    \begin{equation*} \vec{x} = A^{-1}\vec{0} = \vec{0} \end{equation*}
  5. See #8 below.
  6. See #9 below.
  7. See #8 and #9 below.
  8. The column space is found by row reducing \(A^{\intercal}\text{,}\) since the row space of \(A^{\intercal}\) is the column space of \(A\text{.}\)
    \begin{align*} A^{\intercal} = \amp\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 3 \\ 3 \amp -1 \amp -1 \end{bmatrix} \\ -3 R_1 + R_3 \rightarrow R_3 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 3\\ 0 \amp -1 \amp -1 \end{bmatrix} \\ R_2 + 2 R_3 \rightarrow R_3 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 3\\ 0 \amp 0 \amp 1 \end{bmatrix} \end{align*}
    which is now in echelon form, and since the nonzero rows of a matrix in echelon form are linearly independent, this shows #5.
    Continuing to put this in reduced row echelon form:
    \begin{align*} -3R_3 + R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0\\ 0 \amp 0 \amp 1 \end{bmatrix} \\ \frac{1}{2} R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0\\ 0 \amp 0 \amp 1 \end{bmatrix} \end{align*}
    and this shows that the column space of \(A\) is
    \begin{equation*} \text{span}\left(\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1\\0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} \right) \end{equation*}
    which is all of \(\mathbb{R}^3\text{.}\) This also shows that the rank of \(A\) is 3.
  9. In a similar manner to #8, we put \(A\) in reduced row echelon form:
    \begin{align*} \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 3 \\ 0 \amp 2 \amp -1 \\ 0 \amp 3 \amp -1 \end{bmatrix} \\ -3 R_2 +2 R_3 \rightarrow R_3 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 3 \\ 0 \amp 2 \amp -1\\ 0 \amp 0 \amp 1 \end{bmatrix} \\ \begin{array}{r} R_3 + R_2 \rightarrow R_2, \\ -3R_3 + R_1 \rightarrow R_1 \end{array} \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix} \\ \frac{1}{2} R_2 \rightarrow R_2 \qquad \amp \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix} \end{align*}
    and thus the row space is
    \begin{equation*} \text{span}(\{ \begin{bmatrix} 1 \amp 0 \amp 0 \end{bmatrix},\begin{bmatrix} 0 \amp 1 \amp 0 \end{bmatrix},\begin{bmatrix} 0 \amp 0 \amp 1 \end{bmatrix} \}) \end{equation*}
    and this is \(\mathbb{R}^3\text{.}\) In addition, this shows that the rank of \(A\) is 3.
  10. The null space is the set of all \(\vec{x}\) such that \(A\vec{x}=\vec{0}\text{,}\) but in #4 we showed that the only solution to this is \(\vec{x}=\vec{0}\text{.}\) The nullity, which is the dimension of the null space, is therefore 0.
  11. From #9, we showed that the reduced row echelon form of \(A\) is \(I\text{.}\)
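The hand computations above can also be checked numerically. The following is a brief sketch using NumPy (not part of the text) that confirms the determinant, inverse, and rank found for this matrix:

```python
import numpy as np

# The matrix from Example 2.9.2
A = np.array([[1.0, 0.0, 3.0],
              [0.0, 2.0, -1.0],
              [0.0, 3.0, -1.0]])

# 1. The determinant is nonzero (equal to 1, up to floating-point error)
print(np.linalg.det(A))

# 2. The inverse matches the one found by row reduction:
#    [[1, 9, -6], [0, -1, 1], [0, -3, 2]]
A_inv = np.linalg.inv(A)
print(A_inv)

# 8/9. The rank is 3, so the column and row spaces are all of R^3
print(np.linalg.matrix_rank(A))

# 3. A x = b has a unique solution for any right-hand side b
b = np.array([1.0, 2.0, 3.0])  # an arbitrary choice of b
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))   # True
```

Each printed value agrees with the corresponding statement of the theorem verified by hand above.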
This theorem will be extremely helpful in finding certain scalars and vectors called eigenvalues and eigenvectors. The following example shows its usefulness.

Example 2.9.4.

Consider the matrix
\begin{equation*} A = \begin{bmatrix} 1 \amp 0 \amp 2 \\ 0 \amp -2 \amp 1 \\ 2 \amp -2 \amp 5 \end{bmatrix} \end{equation*}
Show that \(A\vec{x} = \vec{0}\) has a nontrivial solution, first by using Theorem 2.9.3 and then by directly finding solutions.
Solution.
First, find the determinant by expanding down the first column:
\begin{align*} |A| \amp = 1 \begin{vmatrix} -2 \amp 1 \\ -2 \amp 5 \end{vmatrix} + 2 \begin{vmatrix} 0 \amp 2 \\ -2 \amp 1 \end{vmatrix} \\ \amp = (-10-(-2)) + 2 (0-(-4)) = -8 + 8 = 0 \end{align*}
and therefore by Theorem 2.9.3, there is a nontrivial solution to \(A \vec{x} = \vec{0}\text{.}\)
Next, we’ll solve the matrix equation by Gauss’ method.
\begin{align*} \amp \qquad \left[\begin{array}{rrr|r} 1 \amp 0 \amp 2 \amp 0 \\ 0 \amp -2 \amp 1 \amp 0 \\ 2 \amp -2 \amp 5 \amp 0 \end{array}\right] \\ -2 R_1 + R_3 \rightarrow R_3 \amp \qquad \left[\begin{array}{rrr|r} 1 \amp 0 \amp 2 \amp 0 \\ 0 \amp -2 \amp 1 \amp 0 \\ 0 \amp -2 \amp 1 \amp 0 \end{array}\right] \\ -R_2 + R_3 \rightarrow R_3 \amp \qquad \left[\begin{array}{rrr|r} 1 \amp 0 \amp 2 \amp 0 \\ 0 \amp -2 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array}\right] \end{align*}
The resulting equations are
\begin{align*} x_1 + 2x_3 \amp = 0 \\ -2x_2 + x_3 \amp = 0 \end{align*}
Solving for the leading variables in terms of the free variable \(x_3\) gives
\begin{align*} x_1 \amp = -2x_3 \\ x_2 \amp = \frac{1}{2} x_3 \end{align*}
and the solution set is
\begin{equation*} \left\{ \begin{bmatrix} -2 \\ 1/2 \\ 1 \end{bmatrix} x_3 \; | \; x_3 \in \mathbb{R} \right\} \end{equation*}
This shows directly that the matrix equation \(A\vec{x}=\vec{0}\) has nontrivial solutions: any nonzero choice of \(x_3\) gives one.
Also notice that the last matrix of Gauss’ method shows that the rank is 2 (since there are only 2 nonzero rows). Since the rank is less than 3, this is consistent with another statement in the theorem.
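As with the previous example, these conclusions can be checked numerically; the following sketch (using NumPy, not part of the text) confirms that the determinant vanishes, the rank is 2, and the solution found above satisfies \(A\vec{x}=\vec{0}\text{:}\)

```python
import numpy as np

# The matrix from Example 2.9.4
A = np.array([[1.0, 0.0, 2.0],
              [0.0, -2.0, 1.0],
              [2.0, -2.0, 5.0]])

# The determinant is 0 (up to floating-point error), so by the theorem
# A x = 0 must have nontrivial solutions
print(np.linalg.det(A))

# The rank is 2, matching the two nonzero rows after elimination
print(np.linalg.matrix_rank(A))

# Check the particular solution (-2, 1/2, 1) found above with x_3 = 1
x = np.array([-2.0, 0.5, 1.0])
print(np.allclose(A @ x, 0))  # True
```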