
Applied Mathematics

Section 4.3 Basis of a Vector Space

From the previous two sections, we have a good feel for what a vector space is. Basically, it is a set of vectors (vectors in \(\mathbb{R}^n\text{,}\) polynomials, matrices, other functions) with addition and scalar multiplication satisfying certain properties. However, how can one write down elements of a vector space?
In this section, we examine a basis of a vector space as well as a representation using that basis.

Subsection 4.3.1 The Basis and Dimension of a Vector Space

In the previous section, we saw that there is some correspondence between a vector space and a set of spanning vectors. In this section, we formalize this relationship.

Definition 4.3.1.

A basis of a vector space is a tuple of vectors from the vector space that form a linearly independent set and span the vector space.
Note: the basis will be a tuple of vectors because the order of the vectors will be important. We will denote the tuple with parentheses, \(( \boldsymbol{s}_1, \boldsymbol{s}_2, \ldots )\text{.}\)

Example 4.3.2.

We showed in Example 2.5.3 that the set
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix} \right\} \end{equation*}
spans \(\mathbb{R}^2\text{,}\) and since the second vector is not a multiple of the first, they are linearly independent. Therefore the tuple
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix} \right) \end{equation*}
forms a basis for \(\mathbb{R}^2\text{.}\)

Example 4.3.3.

The tuple of vectors
\begin{equation*} \left( \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\0 \end{bmatrix} \right) \end{equation*}
is a different basis of \(\mathbb{R}^2\) because the order of the vectors is different. The facts that they span \(\mathbb{R}^2\) and are linearly independent do not depend on the order.

Example 4.3.4.

There are many bases of a vector space. For example,
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\1 \end{bmatrix} \right) \end{equation*}
also spans \(\mathbb{R}^2\text{,}\) and its vectors are linearly independent.

Example 4.3.5.

Does the tuple
\begin{equation*} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\1 \end{bmatrix}, \begin{bmatrix} 2 \\ 3 \end{bmatrix} \right) \end{equation*}
form a basis of \(\mathbb{R}^2\text{?}\)
Solution.
These three vectors are not linearly independent. Although one could show this by solving a homogeneous system, simply note that
\begin{equation*} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{equation*}
Since the vectors are not linearly independent, they cannot form a basis of \(\mathbb{R}^2\text{.}\)
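As a quick numerical check of this dependence (a NumPy sketch, not part of the original example), the rank of the matrix whose columns are the three candidate vectors cannot exceed 2:

```python
import numpy as np

# The three candidate basis vectors as the columns of a 2x3 matrix.
A = np.array([[1, 1, 2],
              [0, 1, 3]])

# At most 2 vectors in R^2 can be linearly independent; the rank of A
# is 2, not 3, so the three columns must be dependent.
print(np.linalg.matrix_rank(A))  # 2

# Verify the dependence relation [2, 3] = 3*[1, 1] - [1, 0].
print(3 * A[:, 1] - A[:, 0])  # [2 3]
```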
Although there are many different bases for a given space, some are more useful than others. For many spaces there is a particularly simple one, called a standard basis.

Definition 4.3.6.

The tuple
\begin{equation*} {\cal E}_n = \left( \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \cdots, \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \right) \end{equation*}
is called the standard basis or natural basis of \(\mathbb{R}^n\text{.}\) The vectors in the basis are called \(\boldsymbol{e}_1, \boldsymbol{e}_2, \ldots\text{.}\)

Remark 4.3.7.

The natural basis of \({\cal P}_3\) is \(( 1, x, x^2, x^3 )\text{.}\)
We saw bases of vector spaces (or subspaces) at the beginning of this course without knowing that they were vector spaces. For example, in Example 1.4.5, we solved a linear system. Its associated homogeneous system is
\begin{align*} x_2 + 3x_3 -9 x_4 + 11 x_5 \amp = 0, \\ 2x_3 \phantom{-9x_4} + 4x_5 \amp = 0, \\ 3x_5 \amp = 0. \end{align*}
The solution set (which is a subspace of \(\mathbb{R}^5\)) can be written as
\begin{equation*} \left\{\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} t + \begin{bmatrix} 0 \\ 9 \\ 0 \\ 1 \\ 0 \end{bmatrix} s \; | \; t, s \in \mathbb{R} \right\} \end{equation*}
If we let \(\boldsymbol{v}_1\) and \(\boldsymbol{v}_2\) be the two vectors above, then \((\boldsymbol{v}_1, \boldsymbol{v}_2)\) forms a basis of the solution space. Since there are only two vectors and neither is a constant multiple of the other, it is easy to see that they are linearly independent. Because of the form of the solution set, you can also see that they span the space.
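We can sketch a check that these two vectors solve the homogeneous system (a NumPy sketch; the coefficient matrix below is read off from the equations above):

```python
import numpy as np

# Coefficient matrix of the homogeneous system in x1, ..., x5.
A = np.array([[0, 1, 3, -9, 11],
              [0, 0, 2,  0,  4],
              [0, 0, 0,  0,  3]])

# The two proposed basis vectors of the solution space.
v1 = np.array([1, 0, 0, 0, 0])
v2 = np.array([0, 9, 0, 1, 0])

# Both satisfy A v = 0, so every combination t*v1 + s*v2 does as well.
print(A @ v1)  # [0 0 0]
print(A @ v2)  # [0 0 0]
```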

Definition 4.3.8.

In a vector space with basis \(B\text{,}\) the representation of a vector \(\boldsymbol{v}\) with respect to the basis \(B\) is the column vector of the coefficients used to express \(\boldsymbol{v}\) as a linear combination of the basis vectors:
\begin{equation*} \text{Rep}_B (\boldsymbol{v}) = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} \end{equation*}
where \(B=( \boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \ldots , \boldsymbol{\beta}_n )\) and
\begin{equation*} \boldsymbol{v} = c_1 \boldsymbol{\beta}_1 + c_2 \boldsymbol{\beta}_2 + \cdots + c_n \boldsymbol{\beta}_n \end{equation*}

Example 4.3.9.

Consider the space \({\cal P}_2\text{,}\) the space of quadratic polynomials. Let \(B=( 1, 1+x, 1+x+x^2 )\) be a basis of \({\cal P}_2\) and \(\boldsymbol{v} = 2x+x^2\text{.}\) To find the representation, we need to find \(c_1, c_2\) and \(c_3\) such that
\begin{equation*} c_1 \cdot 1 + c_2 \cdot (1+x) + c_3 \cdot (1+x+x^2) = 2x+x^2 \end{equation*}
Equating coefficients, this is the same as solving the linear system:
\begin{align*} c_1 + c_2 + c_3 \amp = 0 \\ c_2 + c_3 \amp = 2 \\ c_3 \amp = 1 \end{align*}
resulting in \(c_1=-2, c_2=1, c_3=1\text{,}\) therefore
\begin{equation*} \text{Rep}_B (\boldsymbol{v}) = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix} \end{equation*}
If instead the basis is given as \(D=( 2,2x,x^2 )\text{,}\) then
\begin{equation*} c_1 \cdot 2 + c_2 \cdot (2x) + c_3 \cdot (x^2) = 2x+x^2 \end{equation*}
which shows that \(c_1=0, c_2 =1, c_3 = 1\text{,}\) therefore
\begin{equation*} \text{Rep}_D (\boldsymbol{v}) = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \end{equation*}
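The systems above can also be solved numerically: put the coefficient vectors of the basis polynomials (with respect to \(1, x, x^2\)) as the columns of a matrix and solve (a NumPy sketch, not part of the text):

```python
import numpy as np

# Columns: coefficients of 1, 1+x, 1+x+x^2 with respect to (1, x, x^2).
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# v = 2x + x^2 has coefficients (0, 2, 1) in the natural basis.
v = np.array([0, 2, 1], dtype=float)

rep_B = np.linalg.solve(B, v)
print(rep_B)  # [-2.  1.  1.]

# Same computation for the basis D = (2, 2x, x^2).
D = np.array([[2, 0, 0],
              [0, 2, 0],
              [0, 0, 1]], dtype=float)
rep_D = np.linalg.solve(D, v)
print(rep_D)  # [0. 1. 1.]
```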

Subsubsection 4.3.1.1 Representations in the natural basis

As we saw above, finding a representation in a basis requires solving a linear system. However, representations in the natural basis are simple to compute. If we use the natural basis \(E=( 1, x, x^2 )\) for the quadratic example above, then
\begin{equation*} \text{Rep}_E (\boldsymbol{v}) = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} \end{equation*}
are just the coefficients of the \(x^n\) terms of the vector \(2x+x^2\text{.}\) The following example shows that the representation of a vector in \(\mathbb{R}^3\) in the natural basis is what we expect: the vector itself.
Example 4.3.10.
Find the representation of the vector
\begin{equation*} \boldsymbol{v} = \begin{bmatrix} -3 \\ 2\\ 4 \end{bmatrix} \end{equation*}
in the natural basis
\begin{equation*} {\cal E}_3 = \left( \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right) \end{equation*}
Solution.
We seek the vector \(\boldsymbol{c} = [c_1, c_2, c_3]^{\intercal}\) such that
\begin{equation*} c_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 2 \\ 4 \end{bmatrix} \end{equation*}
which is just that \(c_1=-3, c_2=2,\) and \(c_3=4\) so the representation of the vector in the basis \({\cal E}_3\) is
\begin{equation*} \text{Rep}_{\cal E} (\boldsymbol{v}) = \begin{bmatrix} -3 \\ 2 \\ 4 \end{bmatrix} \end{equation*}
which is just the original vector.
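Numerically, the same conclusion falls out because the matrix of basis vectors is the identity (a NumPy sketch under that observation):

```python
import numpy as np

# The natural basis vectors of R^3, stacked as columns, form the identity.
E3 = np.eye(3)
v = np.array([-3, 2, 4], dtype=float)

# Solving I c = v gives back v itself, i.e. Rep_E(v) = v.
rep = np.linalg.solve(E3, v)
print(rep)  # [-3.  2.  4.]
```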
The last example in this section uses matrices. The natural basis for \(\mathcal{M}_{2 \times 2}\) is
\begin{equation*} B = \left( \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right) \end{equation*}
Example 4.3.11.
Find
\begin{equation*} \text{Rep}_B \left( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \right) \end{equation*}
Solution.
Formally, one needs to find \(c_1, c_2, c_3\) and \(c_4\) such that
\begin{equation*} \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} = c_1 \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} + c_4 \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \end{equation*}
but because of the nice structure of the basis, \(c_1=1,c_2=2,c_3=3,\) and \(c_4=4\text{,}\) so
\begin{equation*} \text{Rep}_B \left( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \end{equation*}
One can generalize to show that
\begin{equation*} \text{Rep}_B \left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} \end{equation*}
and this shows that matrices (which are vectors in the formal sense of vector spaces) can be represented as column vectors by reshaping the matrix into a vector.
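This reshaping is, for instance, what NumPy's `reshape` does (row-major order matches the order of the basis \(B\) above):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Flattening row by row gives the representation in the natural basis B.
rep = A.reshape(-1)
print(rep)  # [1 2 3 4]
```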

Subsection 4.3.2 Dimension

We have covered a few big topics in this chapter. One of these is the spanning set of a vector space. We noted that many different sets can span a vector space, which brought in the notion of linear independence and of a basis. However, a vector space can have many different bases.
Although we did introduce a natural basis, this works well only for some spaces, like \({\cal P}_2\) and \(\mathbb{R}^3\text{.}\) What, for instance, is the natural basis for the solution space of a homogeneous linear system?
If two people argue over which basis of a vector space to use, one thing they will agree on, as we will see, is the number of vectors in the basis. We noted earlier that discarding redundant vectors from a spanning set is a good way to produce a basis; what is unique about bases is the number of vectors in any one of them.

Definition 4.3.12.

A vector space is finite dimensional if it has a basis with only finitely-many vectors.

Theorem 4.3.13.

Any two bases of a finite-dimensional vector space have the same number of vectors.
Because of this theorem, we define the dimension in the following manner.

Definition 4.3.14.

The dimension of a finite dimensional vector space is the number of vectors in any of its bases.

Example 4.3.15.

  • The dimension of \(\mathbb{R}^n\) is \(n\text{.}\) Although there are many bases, consider \(\mathcal{E}_n\text{,}\) the natural basis, which has \(n\) elements.
  • The dimension of \({\cal P}_n\) is \(n+1\text{.}\) The natural basis of \(\mathcal{P}_n\) is \((1,x,x^2, \ldots, x^n)\) with \(n+1\) elements.
  • The dimension of \({\cal M}_{2 \times 2}\text{,}\) the vector space of all 2 by 2 matrices is 4. A natural basis for this is:
    \begin{equation*} \left( \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right) \end{equation*}
    and since there are 4 elements, the dimension is 4.
  • The dimension of \(\mathcal{M}_{m \times n}\) is \(mn\text{.}\) There is a natural basis that is similar to the one in the previous example.

Subsection 4.3.3 Bases of Subspaces

There were a number of important ideas in this section, so a summary is in order. A basis of a space or subspace is useful for writing down elements of the space; that is, if we know a basis, then we know what is in the space. Additionally, the representation of an element is the column vector of its coefficients in terms of the basis.
This means that any element of a finite-dimensional vector space can be represented as a column vector, and as we will see, this lets us apply many of the techniques from Chapter 1 and Chapter 2. We will start to see that, since we can write any polynomial as a vector, many operations on polynomials (such as multiplication, differentiation, and integration) can be carried out using matrices and vectors.