
Applied Mathematics

Section 2.6 Linear Independence

As we saw in Example 2.5.3 and Example 2.5.4, two vectors seem to span \(\mathbb{R}^2\) and a third vector does not contribute anything new (which is why there was a free variable in Example 2.5.4). We will be able to explain this with the notions of linear independence and dependence.

Definition 2.6.1.

Let \(S\) be a set of vectors from \(\mathbb{R}^n\text{.}\) The set is linearly independent if none of the elements of \(S\) can be written as a linear combination of the other elements of \(S\text{.}\) If it is not linearly independent, then \(S\) is said to be linearly dependent.
Specifically, if the only solution to
\begin{equation*} c_1 \vec{u}_1 + c_2 \vec{u}_2 + \cdots + c_n \vec{u}_n = \vec{0} \end{equation*}
is \(c_1=c_2 = \cdots = c_n=0\text{,}\) then the vectors \(\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_n\) are linearly independent.
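This definition can be checked numerically. As a small sketch (assuming NumPy is available; the helper name below is ours, not part of the text), the vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors, since full column rank forces the homogeneous system to have only the trivial solution.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True when the given vectors are linearly independent.

    The vectors are independent exactly when the matrix having them as
    columns has rank equal to the number of vectors, i.e. when
    c_1*u_1 + ... + c_n*u_n = 0 forces every c_i to be 0.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# Two quick checks with vectors from R^2:
print(is_linearly_independent([np.array([1, 0]), np.array([1, 1])]))   # True
print(is_linearly_independent([np.array([1, 2]), np.array([2, 4])]))   # False
```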

Example 2.6.2.

Show that
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\} \end{equation*}
is linearly independent.
Solution.
In this case, there are only two vectors and it is fairly easy to see that the second cannot be written as a linear combination of the first (or vice versa). Therefore, this set is linearly independent.

Example 2.6.3.

Is the set of vectors
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \end{bmatrix} \right\} \end{equation*}
linearly independent or dependent?
Solution.
In this case, since the second vector is twice the first, these vectors are linearly dependent.
The example above had only two vectors in the set; two vectors are linearly independent if and only if they are not constant multiples of one another. For larger sets, we use the more technical definition instead.

Example 2.6.4.

Show that
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} \end{equation*}
is linearly independent.
Solution.
We will show that the only solution to
\begin{align*} c_1 \vec{u}_1 + c_2 \vec{u}_2 + c_3 \vec{u}_3 \amp = \vec{0} \amp \text{or}\amp\amp c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}+ c_2 \begin{bmatrix} 1 \\2 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \amp = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{align*}
is the trivial solution. Writing down an equation for each component leads to
\begin{align*} c_1 + c_2 \amp = 0\\ 2 c_2 \amp = 0, \\ c_3 \amp = 0. \end{align*}
The last two equations show that \(c_2=c_3=0\text{,}\) and substituting these into the first equation shows that \(c_1=0\text{,}\) so the only solution is the trivial one. By the definition, this set is linearly independent.
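The same conclusion can be reached computationally. As a sketch (assuming NumPy; not part of the text), the coefficient matrix of the system above has full rank, which means the homogeneous system has only the trivial solution:

```python
import numpy as np

# Columns are the three vectors u1, u2, u3 from the example.
A = np.array([[1, 1, 0],
              [0, 2, 0],
              [0, 0, 1]])

# Rank 3 (full rank) means c1 = c2 = c3 = 0 is the only solution to
# A c = 0, so the columns are linearly independent.
print(np.linalg.matrix_rank(A))  # 3
```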
The last example in this section will review why we got the results in Example 2.5.4.

Example 2.6.5.

Is the set of vectors
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\1 \end{bmatrix} \right\} \end{equation*}
linearly independent or dependent?
Solution.
We set a linear combination of these vectors equal to the zero vector and solve.
\begin{equation*} c_1\begin{bmatrix} 1 \\ 0 \end{bmatrix}+c_2 \begin{bmatrix} 1 \\ 1 \end{bmatrix}+c_3 \begin{bmatrix} 0 \\1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{equation*}
or the linear system in terms of the \(c\)’s
\begin{align*} c_1 + c_2 \amp = 0 \\ c_2 + c_3 \amp = 0 \end{align*}
and in this case, letting \(c_3\) be a free variable, the solution is
\begin{align*} c_2 \amp = -c_3 \\ c_1 \amp = -c_2 = c_3 \end{align*}
Since there are nontrivial solutions (any nonzero choice of \(c_3\)), these vectors are linearly dependent. The relationship between the constants shows the dependence. That is,
\begin{equation*} c_3\begin{bmatrix} 1 \\ 0 \end{bmatrix}-c_3 \begin{bmatrix} 1 \\ 1 \end{bmatrix}+c_3 \begin{bmatrix} 0 \\1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{equation*}
or as a simple relationship
\begin{equation*} \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{equation*}
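The dependence can be verified directly. In this sketch (assuming NumPy), choosing the free variable \(c_3 = 1\) gives \(c_1 = 1\) and \(c_2 = -1\), and the resulting combination is the zero vector; the rank of the matrix of columns is also 2, less than the number of vectors:

```python
import numpy as np

u1 = np.array([1, 0])
u2 = np.array([1, 1])
u3 = np.array([0, 1])

# Choosing the free variable c3 = 1 gives c1 = 1 and c2 = -1.
print(1 * u1 - 1 * u2 + 1 * u3)  # [0 0]

# Rank 2 with 3 vectors confirms linear dependence.
print(np.linalg.matrix_rank(np.column_stack([u1, u2, u3])))  # 2
```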
The following theorem shows the relationship between the number of vectors and the dimension of the space containing them.