
Applied Mathematics

Section 3.2 The Span and Basis of a Subspace

In Section 2.5, we saw the span of a set of vectors in \(\mathbb{R}^n\text{.}\) We now extend this idea to the span of vectors in any vector space. In addition, the notion of a basis of a subspace is introduced.

Definition 3.2.1.

Let \(U\) be a subset of a vector space \(V\text{.}\) If \(U\) is also a vector space under the same operations, then \(U\) is a subspace of \(V\text{.}\)

Example 3.2.2.

We showed in Example 3.1.4 that a line in \(\mathbb{R}^2\) that passes through the origin is a vector space. Since such a line is a subset of \(\mathbb{R}^2\text{,}\) it is a subspace of \(\mathbb{R}^2\) as well.

Example 3.2.3.

Show that \(\mathbb{R}^2\) is a subspace of \(\mathbb{R}^3\text{.}\)
Solution.
If we identify \(\mathbb{R}^2\) with the set of vectors in \(\mathbb{R}^3\) whose third component is zero, then this set is itself a vector space and a subset of \(\mathbb{R}^3\text{,}\) so \(\mathbb{R}^2\) is a subspace of \(\mathbb{R}^3\text{.}\)

Example 3.2.4.

Recall that the set
\begin{equation*} {\cal P}_2=\{ a_0 + a_1 x + a_2 x^2\; | \; a_0, a_1, a_2 \in \mathbb{R} \} \end{equation*}
is the set of all quadratic functions.
The set \({\cal P}_1 = \{a_0 + a_1 x \; | \; a_0, a_1 \in \mathbb{R} \}\) of all linear functions is itself a vector space as well as a subset of \({\cal P}_2\text{,}\) therefore \({\cal P}_1\) is a subspace of \({\cal P}_2\text{.}\)
In addition, the set \(\{ a x^2\; | \; a \in \mathbb{R}\}\) is a vector space as well as a subset of \({\cal P}_2\text{,}\) therefore it is a subspace.
The above examples show that many familiar sets are subspaces. In many cases, though, it isn’t evident that a set is a subspace, and proving it directly would mean verifying all 10 properties of a vector space. The next lemma shows that this isn’t necessary.

Lemma 3.2.5.

A nonempty subset \(S\) of a vector space \(V\) is a subspace of \(V\) if \(r_1 \vec{s}_1 + r_2 \vec{s}_2 \in S\) for all \(\vec{s}_1, \vec{s}_2 \in S\) and all \(r_1, r_2 \in \mathbb{R}\text{.}\)

This means that if \(S\) is a subset of a vector space \(V\text{,}\) then to prove that \(S\) is a subspace, we only need to check that \(r_1 \vec{s}_1 + r_2 \vec{s}_2 \in S\text{.}\)

Proof.

Since \(S\) is a subset of \(V\text{,}\) properties (2)-(5) and (7)-(10) of Definition 3.1.1 automatically hold for \(S\text{.}\) Thus we only need to prove closure under addition (property 1) and scalar multiplication (property 6).
Property 1: Because \(r_1 \vec{s}_1 + r_2 \vec{s}_2 \in S\text{,}\) let \(r_1=r_2=1\text{;}\) thus \(\vec{s}_1+\vec{s}_2 \in S\text{.}\)
Property 6: Because \(r_1 \vec{s}_1 + r_2 \vec{s}_2 \in S\text{,}\) let \(r_2=0\text{;}\) thus \(r_1 \vec{s}_1 \in S\text{.}\)

Example 3.2.6.

Show that
\begin{equation*} V = \left\{ \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \; | \; v_2 = k v_1 \right\} \end{equation*}
(that is, all vectors on the line through the origin with fixed slope \(k\)) is a subspace of \(\mathbb{R}^2\text{.}\)
Solution.
We will use Lemma 3.2.5. Let
\begin{align*} \vec{u} \amp = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \amp \vec{v} \amp = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \end{align*}
be elements of \(V\text{.}\) That is, \(u_2 = k u_1\) and \(v_2 = k v_1\text{.}\) Then
\begin{align*} r_1 \vec{u} + r_2 \vec{v} \amp = r_1 \begin{bmatrix} u_1 \\ k u_1 \end{bmatrix} + r_2 \begin{bmatrix} v_1 \\ k v_1 \end{bmatrix} = \begin{bmatrix} r_1 u_1 + r_2 v_1 \\ r_1 k u_1 + r_2 k v_1 \end{bmatrix} \\ \amp = \begin{bmatrix} r_1 u_1 + r_2 v_1 \\ k (r_1 u_1 + r_2 v_1) \end{bmatrix} \end{align*}
which is an element of \(V\) because the second component is \(k\) times the first one. Thus \(V\) is a subspace of \(\mathbb{R}^2\text{.}\)
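As a quick numerical sanity check (not a proof), we can test the closure computation for one particular slope. A minimal sketch, assuming NumPy is available and with \(k=3\) chosen arbitrarily:

```python
import numpy as np

k = 3.0                          # a fixed slope, chosen arbitrarily
u = np.array([2.0, k * 2.0])     # u2 = k*u1, so u lies in V
v = np.array([-1.0, k * -1.0])   # v2 = k*v1, so v lies in V

r1, r2 = 1.5, -0.25
w = r1 * u + r2 * v

# w lies in V exactly when its second component is k times its first.
assert np.isclose(w[1], k * w[0])
print(w)  # [3.25 9.75]
```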

Example 3.2.7.

Show using Lemma 3.2.5 that
\begin{equation*} V =\left\{ \begin{bmatrix} a \amp 0 \\ 0 \amp b \end{bmatrix} \; | \; a,b \in \mathbb{R} \right\} \end{equation*}
the set of all diagonal \(2 \times 2\) matrices, is a subspace of \({\cal M}_{2 \times 2}\text{,}\) the vector space of all \(2 \times 2\) matrices.
Solution.
In this case, we need to show that for any two matrices
\begin{align*} A \amp = \begin{bmatrix} a_1 \amp 0 \\ 0 \amp b_1 \end{bmatrix}, \amp B \amp = \begin{bmatrix} a_2 \amp 0 \\ 0 \amp b_2 \end{bmatrix} \end{align*}
and scalars \(r_1, r_2 \in \mathbb{R}\text{,}\) the combination \(r_1 A + r_2 B\) is in the set:
\begin{align*} r_1 A + r_2 B \amp = r_1 \begin{bmatrix} a_1 \amp 0 \\ 0 \amp b_1 \end{bmatrix}+ r_2 \begin{bmatrix} a_2 \amp 0 \\ 0 \amp b_2 \end{bmatrix} \\ \amp = \begin{bmatrix} r_1 a_1 + r_2 a_2 \amp 0 \\ 0 \amp r_1 b_1 + r_2 b_2 \end{bmatrix} \end{align*}
which is a diagonal matrix and therefore in \(V\text{;}\) thus \(V\) is a subspace.

Lemma 3.2.8.

The null space of an \(m \times n\) matrix \(A\text{,}\) that is, the set of all vectors \(\vec{x}\) satisfying \(A\vec{x}=\vec{0}\text{,}\) is a subspace of \(\mathbb{R}^n\text{.}\)

Proof.

We will use Lemma 3.2.5 to prove this. Let both \(\vec{x}\) and \(\vec{y}\) be in the null space of \(A\text{.}\) This means that \(A\vec{x}=\vec{0}\) and \(A\vec{y}=\vec{0}\text{.}\) We need to show that \(r_1 \vec{x} + r_2 \vec{y}\) is in the null space of \(A\text{.}\)
\begin{align*} A(r_1 \vec{x} + r_2 \vec{y}) \amp = r_1 A\vec{x} + r_2 A\vec{y} \\ \amp = r_1 \vec{0} + r_2 \vec{0} = \vec{0} \end{align*}
Vectors in the null space have \(n\) components, so the null space is a subset of \(\mathbb{R}^n\text{.}\) Since \(r_1 \vec{x} + r_2 \vec{y}\) is in the null space of \(A\text{,}\) the null space is a subspace of \(\mathbb{R}^n\text{.}\)
This is an important result that we will use when we study eigenvalues in the next chapter.
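A null space basis can also be found with a computer algebra system. A minimal sketch, assuming SymPy is available; the matrix below is an arbitrary example, not one from the text:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

basis = A.nullspace()   # a list of column vectors spanning the null space
print(basis)

# A linear combination of null space vectors is again in the null space,
# as the lemma asserts.
x = 2 * basis[0] - 5 * basis[1]
assert A * x == Matrix([0, 0])
```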

Subsection 3.2.1 The Span of a Set of Vectors

We saw in Section 2.5 the span of a set of vectors in \(\mathbb{R}^n\text{.}\) We now generalize this to any vector space.

Definition 3.2.9.

The span of a nonempty subset \(S=\{\vec{s}_1, \vec{s}_2, \ldots, \vec{s}_n\}\) of a vector space is the set of all linear combinations of the vectors in \(S\text{.}\) That is,
\begin{equation*} \text{span}(S) = \{ c_1 \vec{s}_1 + c_2 \vec{s}_2 + \cdots + c_n \vec{s}_n \; | \; c_1, c_2, \ldots, c_n \in \mathbb{R} \}. \end{equation*}
To show that a set of vectors spans a subspace, we need to show that any vector in the subspace can be written as a linear combination of the spanning vectors.

Example 3.2.10.

Show that the set \(\{2+x,1,x+x^2\}\) spans \(\mathcal{P}_2\text{.}\)
Solution.
In this case, we need to show that a general polynomial in \(\mathcal{P}_2\) can be written as a linear combination of elements of the given set. That is,
\begin{equation*} c_1 (2+x) + c_2 (1) + c_3 (x+x^2) = a_0 + a_1 x + a_2 x^2 \end{equation*}
and if there is a solution for the \(c\)’s, then the set spans \(\mathcal{P}_2\text{.}\) To find the solution, use the technique of equating coefficients: write down the equations for the constant terms, the \(x\) terms, and the \(x^2\) terms respectively.
\begin{align*} 2 c_1 + c_2 \amp = a_0 \\ c_1 + c_3 \amp = a_1 \\ c_3 \amp = a_2 \end{align*}
This has a solution \(c_3=a_2, c_1 = a_1-a_2\) and \(c_2 = a_0 - 2(a_1-a_2)\text{,}\) which means that a linear combination of the three β€œvectors” can form any quadratic function, thus the given set spans \(\mathcal{P}_2\text{.}\)
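The equating-of-coefficients step can be automated. A minimal sketch, assuming SymPy is available, that re-derives the coefficients found above:

```python
from sympy import symbols, linsolve

c1, c2, c3 = symbols('c1 c2 c3')
a0, a1, a2 = symbols('a0 a1 a2')

# Equate coefficients of 1, x, x^2 in
#   c1*(2 + x) + c2*1 + c3*(x + x^2) = a0 + a1*x + a2*x^2.
system = [2*c1 + c2 - a0,
          c1 + c3 - a1,
          c3 - a2]

print(linsolve(system, [c1, c2, c3]))
# {(a1 - a2, a0 - 2*a1 + 2*a2, a2)}, matching the solution above
```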

Lemma 3.2.11.

The span of a nonempty subset \(S=\{\vec{s}_1, \vec{s}_2, \ldots, \vec{s}_n\}\) of a vector space \(V\) is a subspace of \(V\text{.}\)

Proof.

Using Lemma 3.2.5, we need to check that \(\text{span}(S)\) is closed under linear combinations. Let
\begin{align*} \vec{v} \amp = c_1 \vec{s}_1 + c_2 \vec{s}_2 + \cdots + c_n \vec{s}_n, \\ \vec{w} \amp = k_1 \vec{s}_1 + k_2 \vec{s}_2 + \cdots + k_n \vec{s}_n \end{align*}
be two elements of \(\text{span}(S)\text{.}\) Then
\begin{align*} r_1 \vec{v} + r_2 \vec{w} \amp = r_1 (c_1 \vec{s}_1 + c_2 \vec{s}_2 + \cdots + c_n \vec{s}_n) + r_2 (k_1 \vec{s}_1 + k_2 \vec{s}_2 + \cdots + k_n \vec{s}_n) \\ \amp = (r_1 c_1 + r_2 k_1) \vec{s}_1 + (r_1c_2 + r_2 k_2) \vec{s}_2 + \cdots + (r_1 c_n + r_2 k_n) \vec{s}_n \end{align*}
This shows that \(r_1 \vec{v} + r_2 \vec{w}\) is in \(\text{span}(S)\text{,}\) so \(\text{span}(S)\) is a subspace by Lemma 3.2.5.
This lemma allows us to talk about a vector space in terms of the vectors that span it. For example, instead of thinking of \(\mathcal{P}_2\text{,}\) we can think of the span of \(\{2+x,1,x+x^2\}\) (in this case it may not be more helpful, but in other cases it is).

Example 3.2.12.

Show that the following vectors span \(\mathbb{R}^3\text{:}\)
\begin{align*} \vec{e}_1 \amp = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \amp \vec{e}_2 \amp = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \amp \vec{e}_3 \amp = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \end{align*}
Solution.
Because any vector \(\vec{x}=[x_1,x_2,x_3]^T\) can be written as \(\vec{x}=x_1 \vec{e}_1 + x_2 \vec{e}_2 + x_3 \vec{e}_3\text{,}\) these vectors span \(\mathbb{R}^3\text{.}\)

Example 3.2.13.

Does \(\{2+x,x^2 \}\) span \(\mathcal{P}_2\text{?}\)
Solution.
To determine this, we will try to write a general polynomial in \(\mathcal{P}_2\text{,}\)
\begin{equation*} a_0 + a_1 x + a_2 x^2 \end{equation*}
as a linear combination of the vectors in the set, that is,
\begin{equation*} a_0 + a_1 x + a_2 x^2 = c_1 (2+x) + c_2 x^2 \end{equation*}
Equating coefficients gives
\begin{align*} a_0 \amp = 2c_1 \\ a_1 \amp = c_1 \\ a_2 \amp = c_2 \end{align*}
For general \(a_0\) and \(a_1\text{,}\) there is no solution because \(c_1\) can’t simultaneously equal \(a_1\) and \(a_0/2\text{,}\) so \(\{2+x,x^2\}\) does not span \(\mathcal{P}_2\text{.}\)
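The failure to span can be confirmed computationally. A minimal sketch, assuming SymPy, that tests the concrete polynomial \(1\) (that is, \(a_0=1\text{,}\) \(a_1=0\text{,}\) \(a_2=0\)):

```python
from sympy import symbols, linsolve

c1, c2 = symbols('c1 c2')

# Try to write 1 + 0*x + 0*x^2 as c1*(2 + x) + c2*x^2.
system = [2*c1 - 1,  # constant terms: 2*c1 = 1
          c1,        # x terms:        c1 = 0
          c2]        # x^2 terms:      c2 = 0

print(linsolve(system, [c1, c2]))  # EmptySet: no solution exists
```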

Subsection 3.2.2 The Basis and Dimension of a Vector Space

In the previous subsection, we saw that there is a correspondence between a vector space and a set of spanning vectors. In this subsection, we formalize this relationship.

Definition 3.2.14.

A basis of a vector space is a tuple of vectors from the vector space that form a linearly independent set and span the vector space.
Note: a basis will be a tuple of vectors because the order of the vectors will be important. We will denote the tuple with parentheses, \(( \vec{s}_1, \vec{s}_2, \ldots )\text{.}\)

Example 3.2.15.

We showed in Example 2.5.3 that the set
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix} \right\} \end{equation*}
spans \(\mathbb{R}^2\text{,}\) and since the second vector is not a multiple of the first, they are linearly independent. Therefore the tuple
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix} \right) \end{equation*}
forms a basis for \(\mathbb{R}^2\text{.}\)

Example 3.2.16.

The tuple of vectors
\begin{equation*} \left( \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\0 \end{bmatrix} \right) \end{equation*}
is a different basis of \(\mathbb{R}^2\) because the order of the vectors is different. The fact that they span \(\mathbb{R}^2\) and are linearly independent does not depend on the order.

Example 3.2.17.

There are many bases of a vector space. For example,
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\1 \end{bmatrix} \right) \end{equation*}
also spans \(\mathbb{R}^2\text{,}\) and its vectors are linearly independent.

Example 3.2.18.

Does the tuple
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix}, \begin{bmatrix} 2 \\ 3 \end{bmatrix} \right) \end{equation*}
form a basis of \(\mathbb{R}^2\text{?}\)
Solution.
These three vectors are not linearly independent. Although one can show this with a general computation, simply note that
\begin{equation*} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{equation*}
and since they are not linearly independent, they cannot form a basis of \(\mathbb{R}^2\text{.}\)
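The dependence can also be confirmed with a rank computation. A minimal sketch, assuming NumPy:

```python
import numpy as np

# The columns are the three candidate basis vectors.
A = np.array([[1, 1, 2],
              [0, 1, 3]])

# A rank below 3 means the three columns are linearly dependent.
print(np.linalg.matrix_rank(A))  # 2
```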
Although there are lots of different bases for a given space, some are more useful than others. Many common spaces have a particularly convenient basis called the standard basis.

Definition 3.2.19.

The tuple
\begin{equation*} {\cal E}_n = \left( \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \cdots \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \right) \end{equation*}
is called the standard basis or natural basis of \(\mathbb{R}^n\text{.}\) The vectors in the basis are called \(\vec{e}_1, \vec{e}_2, \ldots\text{.}\)

Remark 3.2.20.

The natural basis of \({\cal P}_3\) is \(( 1, x, x^2, x^3 )\text{.}\)
We saw bases of vector spaces (or subspaces) at the beginning of this course without knowing that they were vector spaces. For example, in Example 1.4.5, we solved a linear system. Its associated homogeneous system is
\begin{align*} x_2 + 3x_3 -9 x_4 + 11 x_5 \amp = 0, \\ 2x_3 \phantom{-9x_4} + 4x_5 \amp = 0, \\ 3x_5 \amp = 0. \end{align*}
The solution (which is a subspace of \(\mathbb{R}^5\)) can be written as
\begin{equation*} \left\{\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} t + \begin{bmatrix} 0 \\ 9 \\ 0 \\ 1 \\ 0 \end{bmatrix} s \; | \; t, s \in \mathbb{R} \right\} \end{equation*}
The two vectors in the solution are a basis of the solution space. Since there are only two vectors and they are not constant multiples of each other, it’s easy to see that they are linearly independent. Also, because of the form of the solution set, you can see that they span the space.
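This basis can be double-checked by computing the null space of the coefficient matrix of the homogeneous system. A minimal sketch, assuming SymPy; the output should agree with the two vectors above, up to ordering and scaling:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system (variables x1, ..., x5;
# x1 does not appear, so its column is zero).
A = Matrix([[0, 1, 3, -9, 11],
            [0, 0, 2, 0, 4],
            [0, 0, 0, 0, 3]])

for b in A.nullspace():
    print(b.T)
# Matrix([[1, 0, 0, 0, 0]]) and Matrix([[0, 9, 0, 1, 0]])
```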

Definition 3.2.21.

In a vector space with basis \(B\text{,}\) the representation of a vector \(\vec{v}\) with respect to the basis \(B\) is the column vector of the coefficients used to express \(\vec{v}\) as a linear combination of the basis vectors:
\begin{equation*} \text{Rep}_B (\vec{v}) = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} \end{equation*}
where \(B=( \vec{\beta}_1, \vec{\beta}_2, \ldots , \vec{\beta}_n )\) and
\begin{equation*} \vec{v} = c_1 \vec{\beta}_1 + c_2 \vec{\beta}_2 + \cdots + c_n \vec{\beta}_n \end{equation*}

Example 3.2.22.

Consider the space \({\cal P}_2\text{,}\) the space of quadratic functions. Let \(B=( 1, 1+x, 1+x+x^2 )\) be a basis of \({\cal P}_2\) and \(\vec{v} = 2x+x^2\text{.}\) To find the representation, we need to find \(c_1, c_2\) and \(c_3\) such that
\begin{equation*} c_1 \cdot 1 + c_2 \cdot (1+x) + c_3 \cdot (1+x+x^2) = 2x+x^2 \end{equation*}
By equating coefficients, this is the same as solving the linear system:
\begin{align*} c_1 + c_2 + c_3 \amp = 0 \\ c_2 + c_3 \amp = 2 \\ c_3 \amp = 1 \end{align*}
resulting in \(c_1=-2, c_2=1, c_3=1\text{,}\) therefore
\begin{equation*} \text{Rep}_B (\vec{v}) = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix} \end{equation*}
If instead the basis is given as \(D=( 2,2x,x^2 )\text{,}\) then
\begin{equation*} c_1 \cdot 2 + c_2 \cdot (2x) + c_3 \cdot (x^2) = 2x+x^2 \end{equation*}
which shows that \(c_1=0, c_2 =1, c_3 = 1\text{,}\) therefore
\begin{equation*} \text{Rep}_D (\vec{v}) = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \end{equation*}
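Both representations can be recovered by solving the corresponding linear systems. A minimal sketch, assuming SymPy:

```python
from sympy import symbols, linsolve

c1, c2, c3 = symbols('c1 c2 c3')

# Rep_B(v) for v = 2x + x^2 in B = (1, 1+x, 1+x+x^2):
system_B = [c1 + c2 + c3,  # constant terms = 0
            c2 + c3 - 2,   # x terms        = 2
            c3 - 1]        # x^2 terms      = 1
print(linsolve(system_B, [c1, c2, c3]))  # {(-2, 1, 1)}

# Rep_D(v) in D = (2, 2x, x^2):
system_D = [2*c1, 2*c2 - 2, c3 - 1]
print(linsolve(system_D, [c1, c2, c3]))  # {(0, 1, 1)}
```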

Subsubsection 3.2.2.1 Representations in the natural basis

As we saw above, finding representations in a basis requires solving another linear system. However, representations in the natural basis are simple calculations. If we use the natural basis \(E=( 1, x, x^2 )\) for the quadratic example above, then
\begin{equation*} \text{Rep}_E (\vec{v}) = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} \end{equation*}
is just the vector of coefficients of the \(1\text{,}\) \(x\text{,}\) and \(x^2\) terms of \(2x+x^2\text{.}\)
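Reading off coefficients in the natural basis amounts to listing the polynomial’s coefficients. A minimal sketch, assuming SymPy (`all_coeffs` lists the highest degree first, so the list is reversed):

```python
from sympy import symbols, Poly

x = symbols('x')
v = 2*x + x**2

# Coefficients ordered as (1, x, x^2), i.e. Rep_E(v).
print(Poly(v, x).all_coeffs()[::-1])  # [0, 2, 1]
```

The following example shows that the representation of a vector in \(\mathbb{R}^3\) is what we expect: the vector itself.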
Example 3.2.23.
Find the representation of the vector
\begin{equation*} \vec{v} = \begin{bmatrix} -3 \\ 2\\ 4 \end{bmatrix} \end{equation*}
in the natural basis
\begin{equation*} {\cal E}_3 = \left( \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right) \end{equation*}
Solution.
We seek the vector \(\vec{c} = [c_1, c_2, c_3]^{\intercal}\) such that
\begin{equation*} c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 2 \\ 4 \end{bmatrix} \end{equation*}
which gives \(c_1=-3, c_2=2,\) and \(c_3=4\text{,}\) so the representation of the vector in the basis \({\cal E}_3\) is
\begin{equation*} \text{Rep}_{\cal E} (\vec{v}) = \begin{bmatrix} -3 \\ 2 \\ 4 \end{bmatrix} \end{equation*}
which is just the original vector.
The last example in this section uses matrices. The natural basis for \(\mathcal{M}_{2 \times 2}\) is
\begin{equation*} B = \left( \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right) \end{equation*}
Example 3.2.24.
Find
\begin{equation*} \text{Rep}_B \biggl( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \biggr) \end{equation*}
Solution.
Formally, one needs to find \(c_1, c_2, c_3\) and \(c_4\) such that
\begin{equation*} \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} = c_1 \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} + c_4 \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \end{equation*}
but given the nice structure of the basis, \(c_1=1,c_2=2,c_3=3,\) and \(c_4=4\text{,}\) so
\begin{equation*} \text{Rep}_B \biggl( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \biggr) = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \end{equation*}
One can generalize to show that
\begin{equation*} \text{Rep}_B \left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} \end{equation*}
and this shows that matrices (which are vectors in the formal sense of vector spaces) can be represented as column vectors by reshaping the matrix into a vector.
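The reshaping observation corresponds directly to array flattening. A minimal sketch, assuming NumPy:

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])

# Row-major flattening reads the entries in the order of the natural
# basis above, giving Rep_B(M) as a vector.
print(M.reshape(-1))  # [1 2 3 4]
```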

Subsection 3.2.3 Dimension

We have been talking about a few big topics in this chapter. One of those is the spanning set of a vector space. We noted that many different sets can span a vector space, which brought in the notions of linear independence and of a basis. However, a vector space can have many different bases.
Although we did introduce a natural basis, which works well for some spaces like \({\cal P}_2\) and \(\mathbb{R}^3\text{,}\) what is the natural basis for the solution space of a homogeneous linear system?
We noted earlier that discarding redundant vectors from a spanning set is a good way to arrive at a basis. And although a vector space has many bases, one thing any two of its bases share, as we will see, is the number of vectors they contain.

Definition 3.2.25.

A vector space is finite dimensional if it has a basis with only finitely many vectors.

Theorem 3.2.26.

Every basis of a finite dimensional vector space has the same number of vectors.

Because of this theorem, we define the dimension in the following manner.

Definition 3.2.27.

The dimension of a finite dimensional vector space is the number of vectors in any of its bases.

Example 3.2.28.

  • The dimension of \(\mathbb{R}^n\) is \(n\text{.}\) Although there are many bases, consider \(\mathcal{E}_n\text{,}\) the natural basis, which has \(n\) elements.
  • The dimension of \({\cal P}_n\) is \(n+1\text{.}\) The natural basis of \(\mathcal{P}_n\) is \((1,x,x^2, \ldots, x^n)\) with \(n+1\) elements.
  • The dimension of \({\cal M}_{2 \times 2}\text{,}\) the vector space of all \(2 \times 2\) matrices, is 4. A natural basis for this is:
    \begin{equation*} \biggl( \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \biggr) \end{equation*}
    and since there are 4 elements, the dimension is 4.
  • The dimension of \({\cal M}_{m \times n}\) is \(mn\text{,}\) as the sketch following this list illustrates.
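A minimal sketch, assuming NumPy, that builds the natural basis of \({\cal M}_{m \times n}\) and counts its elements:

```python
import numpy as np

m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0   # the matrix with a single 1 in entry (i, j)
        basis.append(E)

# One basis matrix per entry, so the dimension of M_{m x n} is m*n.
print(len(basis))  # 6
```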

Subsection 3.2.4 Bases of Subspaces

There were a number of important ideas in this section, so a summary is in order. The basis of a space or subspace is useful for writing down elements of the space: if we know a basis, then we know what’s in the space. Additionally, the representation of an element is the column vector of its coefficients in terms of the basis.
This means that any element of a finite-dimensional vector space can be represented as a column vector, and as we will see, this lets us apply many of the techniques from Chapter 1. We will start to see that, since we can write any polynomial as a vector, many operations on polynomials (such as multiplication, differentiation, and integration) can be carried out using matrices and vectors.