
Applied Mathematics

Section 1.6 Linear Geometry of \(n\)-space

Subsection 1.6.1 Scalar Multiplication of Vectors

In Section 1.4, we saw scalar multiplication of a vector. In \(\mathbb{R}^2\text{,}\) this means
\begin{equation*} r \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} r x_1 \\ r x_2 \end{bmatrix} \end{equation*}
that is, each component of the vector is multiplied by the scalar \(r\text{.}\) Geometrically, multiplication by the scalar \(r\) scales the length of the vector by a factor of \(|r|\) and flips its direction if \(r \lt 0\text{.}\)
Figure 1.6.1. A plot showing multiplication of vectors
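For readers who like to check things numerically, here is a minimal sketch in Python (the vector and scalar below are arbitrary choices, not taken from the text):

```python
# Scalar multiplication acts on each component of the vector.
v = [3, 1]    # an arbitrary vector in R^2
r = -2        # an arbitrary scalar; a negative r also flips the direction

r_times_v = [r * component for component in v]
print(r_times_v)   # [-6, -2]: each component scaled by r
```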

Subsection 1.6.2 Vector Addition

As we saw earlier, vector addition in \(\mathbb{R}^2\) is
\begin{equation*} \vec{u} + \vec{v} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \end{bmatrix} \end{equation*}
Consider two vectors \(\vec{u}\) and \(\vec{v}\) in the plane:
Figure 1.6.2. A plot of two vectors in the plane
The sum of these vectors can be represented by drawing \(\vec{u}\text{,}\) then drawing \(\vec{v}\) starting at the ending point (tip) of \(\vec{u}\text{.}\) The resulting vector
Figure 1.6.3.
starts at the beginning of \(\vec{u}\) and ends at the end of \(\vec{v}\) as seen above.
Another way to think about this is to use \(\vec{u}\) and \(\vec{v}\) as the sides of a parallelogram. The vector \(\vec{u}+\vec{v}\) is then the diagonal of that parallelogram, extending from the common starting point of \(\vec{u}\) and \(\vec{v}\) to the opposite corner.
Figure 1.6.4. The sum of two vectors as the diagonal of a parallelogram
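The componentwise sum can be checked the same way; this short Python sketch uses two arbitrarily chosen vectors:

```python
# Vector addition is componentwise; geometrically, u + v is the diagonal
# of the parallelogram with sides u and v.
u = [3, 1]     # arbitrary vectors in R^2
v = [-1, 2]

u_plus_v = [ui + vi for ui, vi in zip(u, v)]
print(u_plus_v)    # [2, 3]
```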

Subsection 1.6.3 Geometry of Addition and Scalar Multiplication in \(\mathbb{R}^n\)

As we saw in the previous section, we know how to add and scalar-multiply vectors in \(\mathbb{R}^n\text{.}\) The geometry of these operations is similar to that in \(\mathbb{R}^2\text{.}\) For example, in \(\mathbb{R}^3\text{,}\) a vector connects two points in 3-dimensional space. Scalar multiplication scales that vector by a factor of \(r\text{.}\) Addition works the same way: make the ending point of the first vector the starting point of the second vector. The result is the vector from the starting point of the first vector to the ending point of the second vector.
And this extends to any dimension, \(\mathbb{R}^n\text{.}\) Although this is difficult to visualize, it still works the same way. Typically there is no need to draw any vectors in dimensions above 3.

Subsection 1.6.4 Lines in Vector form in \(\mathbb{R}^2\) and \(\mathbb{R}^3\)

First, let’s look at a line in \(\mathbb{R}^2\) that, for example, passes through the points \((2,1)\) and \((3,4)\text{,}\) and denote this line \(L\text{.}\) This would look like:
Figure 1.6.5. A vector version of a line in \(\mathbb{R}^2\)
Let’s make \(\vec{v}\) the vector between the points \((2,1)\) and \((3,4)\text{,}\) as shown in the figure above. Recall that a point and a vector are synonymous if the vector starts at the origin, so call \(\vec{u}\) the vector from \((0,0)\) to \((2,1)\text{.}\)
Next, any point on the line can be written as the sum of \(\vec{u}\) and a scalar multiple of \(\vec{v}\text{,}\) that is, \(\vec{u}+t\vec{v}\) for some \(t\text{.}\) Thus the line through these two points can be written as
\begin{equation*} \left\{ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} + t \begin{bmatrix} 1 \\ 3 \end{bmatrix} \; | \; t \in \mathbb{R} \right\} \end{equation*}
This notion extends easily to \(\mathbb{R}^3\text{.}\) For example, the set of points
\begin{equation*} \left\{ \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix} + t \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} \; | \; t \in \mathbb{R} \right\} \end{equation*}
describes a line in \(\mathbb{R}^3\) that passes through the point \((2,1,3)\) in the direction of the vector with components \(1\text{,}\) \(0\) and \(2\text{,}\) as shown below.
Figure 1.6.6.
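To see concretely how the parameter \(t\) sweeps out points of this line, here is a small Python/NumPy sketch (the sample values of \(t\) are arbitrary):

```python
import numpy as np

# Parametrization of the line in R^3: p + t*d for t in R.
p = np.array([2, 1, 3])   # a point on the line (from the set above)
d = np.array([1, 0, 2])   # the direction vector

for t in [-1, 0, 0.5, 2]:   # arbitrary sample parameter values
    print(t, p + t * d)     # each result is a point on the line
```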

Subsection 1.6.5 Planes in \(\mathbb{R}^3\)

A plane in \(\mathbb{R}^3\) can also be written using vectors, although it is perhaps harder to visualize.

Definition 1.6.7. Plane.

A plane in \(\mathbb{R}^3\) is the set of points
\begin{equation*} \{ \vec{p} + \vec{u} t + \vec{v} s \; | \; t,s \in \mathbb{R} \} \end{equation*}
where \(\vec{p}\text{,}\) \(\vec{u}\) and \(\vec{v}\) are vectors in \(\mathbb{R}^3\text{,}\) and \(\vec{u}\) and \(\vec{v}\) are nonzero and not scalar multiples of one another.
Consider the following example, in which a parallelogram is drawn in the plane. The point \((2,3,1)\) is one corner of the parallelogram and the two sides are
\begin{align*} \vec{v} \amp = \begin{bmatrix} 1 \\ 1 \\1 \end{bmatrix} \amp \vec{w} \amp = \begin{bmatrix} 2 \\ 0 \\ -1 \end{bmatrix} \end{align*}
The parallelogram in \(\mathbb{R}^3\) would look like:
Figure 1.6.8.
The set of all points on the plane can then be written as
\begin{equation*} \left\{ \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} t + \begin{bmatrix} 2 \\ 0 \\ -1 \end{bmatrix} s \; | \; t,s \in \mathbb{R} \right\} \end{equation*}
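A small Python/NumPy sketch can evaluate this parametrization; the four parameter pairs below correspond to the corners of the parallelogram drawn above:

```python
import numpy as np

# Parametrization of the plane: p + t*v + s*w for t, s in R.
p = np.array([2, 3, 1])
v = np.array([1, 1, 1])
w = np.array([2, 0, -1])

for t, s in [(0, 0), (1, 0), (0, 1), (1, 1)]:   # the corners of the parallelogram
    print(t, s, p + t * v + s * w)              # each result is a point on the plane
```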
Extending this idea to \(\mathbb{R}^n\) gives a natural generalization of lines and planes, called linear surfaces.

Definition 1.6.9.

A \(k\)-dimensional linear surface in \(\mathbb{R}^n\) is the set:
\begin{equation*} \{ \vec{p} + t_1 \vec{v}_1 + t_2 \vec{v}_2 + \cdots + t_k \vec{v}_k \; | \; t_1, t_2, \ldots, t_k \in \mathbb{R} \} \end{equation*}
where \(\vec{p}, \vec{v}_1, \ldots, \vec{v}_k\) are vectors in \(\mathbb{R}^n\) and the vectors \(\vec{v}_1, \ldots, \vec{v}_k\) are nonzero.
If \(k=n-1\text{,}\) then the surface is called a hyperplane.

Subsection 1.6.6 Geometry of Linear Systems

You should have noticed that the \(k\)-dimensional linear surface above has the same form as the solution to a general linear system.
  • If the linear system has one free variable, the solution set is a line.
  • If the linear system has two free variables, the solution set is a plane.
  • More generally, if the linear system has \(k\) free variables, the solution set is a \(k\)-dimensional linear surface (a hyperplane when \(k = n-1\)).

Subsection 1.6.7 Length and Angle Measures

Two of the basic ideas of geometry are the notions of length and angle. These are well-defined in \(\mathbb{R}^2\) and extend easily to \(\mathbb{R}^3\text{;}\) in this section we generalize them to \(\mathbb{R}^n\text{.}\) We’ll start with the notion of distance as the length of a vector. Consider first a vector
\begin{equation*} \vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \end{equation*}
in the plane (\(\mathbb{R}^2\)).
Figure 1.6.10. A vector starting at the origin, showing the right triangle formed by its components.
and, using plane geometry (the Pythagorean theorem), the length of \(\vec{v}\text{,}\) denoted \(||\vec{v}||\text{,}\) is the length of the hypotenuse of this triangle, or
\begin{equation*} ||\vec{v}|| = \sqrt{v_1^2+v_2^2} \end{equation*}
and if \(\vec{v}\) is in \(\mathbb{R}^3\text{,}\) the length would also include the square of the third component inside the square root. Thus, we define the length of any vector in \(\mathbb{R}^n\) as follows.

Definition 1.6.11.

The length of a vector \(\vec{v} \in \mathbb{R}^n\) is given by
\begin{equation*} || \vec{v}|| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2 } \end{equation*}
which fits our expectations in \(\mathbb{R}^2\) and \(\mathbb{R}^3\text{.}\)

Example 1.6.12.

Find the length of the vector:
\begin{equation*} \vec{v} = \begin{bmatrix} 3 \\ 1 \\ 0 \\ -5 \end{bmatrix} \end{equation*}
Solution.
\begin{equation*} ||\vec{v}|| = \sqrt{9+1+0+25} = \sqrt{35} \end{equation*}
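As a quick numerical check of this example, here is a short Python/NumPy sketch (NumPy's built-in norm is used only for comparison):

```python
import numpy as np

v = np.array([3, 1, 0, -5])      # the vector from Example 1.6.12

length = np.sqrt(np.sum(v**2))   # sqrt(9 + 1 + 0 + 25)
print(length, np.sqrt(35))       # both print 5.9160...
print(np.linalg.norm(v))         # NumPy's built-in norm agrees
```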

Subsection 1.6.8 Angles of vectors in \(\mathbb{R}^2\)

Consider first two vectors in \(\mathbb{R}^2\text{.}\) The angle between them is the angle formed at their common starting point when both vectors are anchored at the same place (measured in the counterclockwise direction). For example:
Figure 1.6.13.
where
\begin{align*} \vec{u} \amp = \begin{bmatrix} 3 \\ 1 \end{bmatrix}, \amp \vec{v} \amp = \begin{bmatrix} -1 \\ 2 \end{bmatrix} \end{align*}
You can find the angle using plane geometry. In this case, you can make a triangle by connecting the endpoints of the two vectors. Note that the third side of the triangle can be written as
\begin{equation*} \vec{v}-\vec{u} = \begin{bmatrix} -1 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -4 \\ 1 \end{bmatrix} \end{equation*}
Figure 1.6.14.
and the lengths of the three sides are:
\begin{align*} ||\vec{u}|| \amp = \sqrt{10}, \amp ||\vec{v}|| \amp= \sqrt{5}, \amp ||\vec{v}-\vec{u}|| \amp = \sqrt{17} \end{align*}
and then using the law of cosines:
\begin{align*} ||\vec{v}-\vec{u}||^2 \amp = ||\vec{u}||^2 + || \vec{v}||^2 -2 ||\vec{u}|| \, ||\vec{v}|| \cos \theta \\ 17 \amp = 10+5 - 2 \sqrt{10}\sqrt{5} \cos \theta \\ \cos \theta \amp = \frac{2}{-2 \sqrt{10}\sqrt{5}} = -\frac{1}{\sqrt{50}} \end{align*}
which gives \(\theta \approx 98.13^{\circ}\text{.}\)
Note: since the range of \(\cos^{-1}\) is \([0,\pi]\) (or \([0,180^{\circ}]\)), this always gives the angle between the vectors within that range. If you instead need the angle measured counterclockwise from one specific vector to the other, you may need to subtract the result from \(360^{\circ}\text{.}\)
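The same law-of-cosines computation can be carried out numerically; the following Python/NumPy sketch uses the two vectors from the figure above:

```python
import numpy as np

# The two vectors from the figure above.
u = np.array([3, 1])
v = np.array([-1, 2])

# Law of cosines: ||v - u||^2 = ||u||^2 + ||v||^2 - 2 ||u|| ||v|| cos(theta)
lu, lv, ld = np.linalg.norm(u), np.linalg.norm(v), np.linalg.norm(v - u)
cos_theta = (ld**2 - lu**2 - lv**2) / (-2 * lu * lv)

print(np.degrees(np.arccos(cos_theta)))   # approximately 98.13 degrees
```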

Subsection 1.6.9 Angles of vectors in \(\mathbb{R}^n\)

Understanding the section above allows us to extend the notion of the angle between vectors to \(n\) dimensions, with the key being the law of cosines:
\begin{equation*} ||\vec{u}-\vec{v}||^2 = ||\vec{u}||^2 + || \vec{v}||^2 -2 ||\vec{u}|| \, ||\vec{v}|| \cos \theta \end{equation*}
and if we expand this out for
\begin{align*} \vec{u} \amp = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \amp \vec{v} \amp = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \end{align*}
Then,
\begin{align*} ||\vec{u}-\vec{v}||^2 \amp = (u_1-v_1)^2 + (u_2-v_2)^2 + \cdots + (u_n -v_n)^2, \\ ||\vec{u}||^2 \amp = u_1^2 + u_2^2 + \cdots + u_n^2, \\ ||\vec{v}||^2 \amp = v_1^2 + v_2^2 + \cdots + v_n^2, \end{align*}
Expanding the top equation and subtracting the two below:
\begin{align*} ||\vec{u}-\vec{v}||^2-||\vec{u}||^2-||\vec{v}||^2 \amp = -2u_1v_1 - 2u_2v_2 - \cdots -2u_nv_n\\ \amp = -2 (u_1 v_1 + u_2 v_2 + \cdots + u_n v_n) \end{align*}
The term in the parentheses appears often throughout linear algebra and is called the dot product.

Definition 1.6.15.

The dot product (or inner product) of the vectors \(\vec{u}\) and \(\vec{v}\) is defined as
\begin{equation*} \vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 + \cdots +u_nv_n \end{equation*}
Note: the dot product of two vectors is a number (a scalar), and it is defined only for two vectors with the same number of components. Also, for any vector \(\vec{u}\text{,}\) there is a nice relationship between the length and the dot product:
\begin{equation*} \vec{u} \cdot \vec{u} = ||\vec{u}||^2 \end{equation*}
Again, returning to the law of cosines and solving for \(\cos \theta\text{:}\)
\begin{align*} \cos \theta \amp = \frac{||\vec{u}-\vec{v}||^2-||\vec{u}||^2-||\vec{v}||^2}{-2 ||\vec{u}|| \, ||\vec{v}||}\\ \amp = \frac{-2 \vec{u} \cdot \vec{v}}{-2 ||\vec{u}|| \, ||\vec{v}||} \\ \amp = \frac{\vec{u} \cdot \vec{v}}{||\vec{u}|| \, ||\vec{v}||} \end{align*}
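Since this dot-product formula is the one we will use from now on, here is a small Python/NumPy sketch of it (the helper name angle_between is our own, chosen for illustration); applied to the vectors from the \(\mathbb{R}^2\) example above, it reproduces the law-of-cosines answer:

```python
import numpy as np

def angle_between(u, v):
    """Angle (in degrees) between two nonzero vectors in R^n."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cos_theta))

# Same vectors as the R^2 example above; the dot-product formula
# reproduces the law-of-cosines answer of about 98.13 degrees.
print(angle_between(np.array([3, 1]), np.array([-1, 2])))
```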

Subsection 1.6.10 Properties of The Dot Product

As we will see, the dot product is an extremely important concept. Before going on, we state its basic properties.
Commutative
\(\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}\text{,}\)
Distributive
\(\vec{u} \cdot (\vec{v} + \vec{w}) = \vec{u} \cdot \vec{v} + \vec{u} \cdot \vec{w}\text{.}\)
Associative
\(\vec{u} \cdot (r\vec{v}) = (r \vec{u}) \cdot \vec{v} = r (\vec{u} \cdot \vec{v})\text{.}\)
These properties can be shown using the Definition above.
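They can also be spot-checked numerically; the following Python/NumPy sketch uses arbitrarily chosen vectors and an arbitrary scalar:

```python
import numpy as np

# Spot-check the three properties numerically with arbitrarily chosen
# vectors u, v, w and scalar r (floating point, so compare with isclose).
u = np.array([2.0, -4.0, 1.0, 1.0])
v = np.array([3.0, 2.0, 0.0, 5.0])
w = np.array([1.0, 0.0, -2.0, 4.0])
r = 3.5

print(np.isclose(np.dot(u, v), np.dot(v, u)))                      # commutative
print(np.isclose(np.dot(u, v + w), np.dot(u, v) + np.dot(u, w)))   # distributive
print(np.isclose(np.dot(u, r * v), r * np.dot(u, v)))              # scalars factor out
```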

The dot product and the lengths of vectors are tied together by two fundamental inequalities.

Theorem 1.6.16. Triangle Inequality.

For any vectors \(\vec{u}\) and \(\vec{v}\) in \(\mathbb{R}^n\text{,}\)
\begin{align} ||\vec{u} + \vec{v}|| \amp \leq ||\vec{u}|| + ||\vec{v}|| \tag{1.6.1} \end{align}
with equality if and only if one of the vectors is a nonnegative scalar multiple of the other.

Proof.

First, note that if either \(\vec{u} = \vec{0}\) or \(\vec{v} = \vec{0}\text{,}\) then (1.6.1) holds.
Therefore, assume that neither \(\vec{u}\) nor \(\vec{v}\) is the zero vector. Then create the vector
\begin{equation*} ||\vec{u}|| \vec{v} - ||\vec{v}||\vec{u}. \end{equation*}
The square of the length of this vector is nonnegative:
\begin{align} 0 \amp \leq ||(||\vec{u}|| \vec{v} - ||\vec{v}||\vec{u})||^2 \notag\\ \amp = (||\vec{u}|| \vec{v} - ||\vec{v}||\vec{u}) \cdot (||\vec{u}|| \vec{v} - ||\vec{v}||\vec{u})\notag\\ \amp = (||\vec{u} || \vec{v}) \cdot (||\vec{u}|| \vec{v}) - (||\vec{u}|| \vec{v}) \cdot (||\vec{v}|| \vec{u}) - (||\vec{v} || \vec{u}) \cdot (||\vec{u}|| \vec{v}) + (||\vec{v}|| \vec{u}) \cdot (||\vec{v}|| \vec{u})\notag\\ \amp \qquad \qquad\text{using properties of the dot product}\notag\\ \amp = ||\vec{u}||^2 (\vec{v} \cdot \vec{v}) - 2 ||\vec{u}|| \, ||\vec{v}|| (\vec{u} \cdot \vec{v}) + ||\vec{v}||^2 (\vec{u} \cdot \vec{u})\notag\\ \amp = 2 ||\vec{u}||^2 ||\vec{v}||^2 - 2 ||\vec{u}|| \, ||\vec{v}|| (\vec{v} \cdot \vec{u}).\notag \end{align}
Dividing through by \(||\vec{u}|| \, ||\vec{v}||\) (which is positive) gives
\begin{align} 0 \amp \leq 2||\vec{u}|| \, ||\vec{v}|| - 2 (\vec{v} \cdot \vec{u}) \tag{1.6.2} \end{align}
Adding \(||\vec{u} + \vec{v}||^2\) to the left-hand side of (1.6.2) and the equal quantity \((\vec{u}+\vec{v}) \cdot (\vec{u}+\vec{v})\) to the right-hand side gives
\begin{align*} ||\vec{u} +\vec{v}||^2 \amp \leq (\vec{u}+\vec{v}) \cdot (\vec{u}+\vec{v}) + 2||\vec{u}|| ||\vec{v}|| - 2 (\vec{v} \cdot \vec{u}) \\ \amp = \vec{u} \cdot \vec{u} + 2 \vec{u} \cdot \vec{v} + \vec{v} \cdot \vec{v} + 2||\vec{u}|| ||\vec{v}|| - 2 (\vec{v} \cdot \vec{u}) \\ \amp = ||\vec{u}||^2 + 2 ||\vec{u}||\,||\vec{v}|| + ||\vec{v}||^2\\ \amp = (||\vec{u}|| + ||\vec{v}||)^2 \end{align*}
and lastly, taking the square root of both sides
\begin{equation*} ||\vec{u} + \vec{v}|| \leq ||\vec{u}|| + ||\vec{v}|| \end{equation*}
Equality holds exactly when the vector \(||\vec{u}||\vec{v} - ||\vec{v}||\vec{u}\) has length zero. Assuming \(||\vec{v}|| \neq 0\text{,}\) this means
\begin{align*} ||\vec{u}||\vec{v} - ||\vec{v}|| \vec{u} \amp = \vec{0} \\ \text{or} \qquad \qquad \amp \\ \vec{u} \amp = \frac{||\vec{u}||}{||\vec{v}||} \vec{v} \end{align*}
therefore \(\vec{u}\) is a scalar multiple of \(\vec{v}\text{.}\)
This can be visualized by considering the plane in which \(\vec{u}\) and \(\vec{v}\) lie (note that regardless of the value of \(n\text{,}\) the two vectors lie in a common plane within \(\mathbb{R}^n\)).
Figure 1.6.17. A plot showing the triangle inequality for vectors in \(\mathbb{R}^2\)
The vector \(\vec{u}+\vec{v}\) is one side of the triangle and we know that any one side must always be no larger than the sum of the other two.

Theorem 1.6.18. Cauchy-Schwarz Inequality.

For any vectors \(\vec{u}\) and \(\vec{v}\) in \(\mathbb{R}^n\text{,}\)
\begin{equation*} |\vec{u} \cdot \vec{v}| \leq ||\vec{u}|| \, ||\vec{v}|| \end{equation*}
with equality if and only if one of the vectors is a scalar multiple of the other.

Proof.

Inequality (1.6.2), established in the proof of the Triangle Inequality, says that \(\vec{u} \cdot \vec{v} \leq ||\vec{u}|| \, ||\vec{v}||\text{,}\) so the Cauchy-Schwarz Inequality holds whenever \(\vec{u} \cdot \vec{v} \geq 0\text{.}\) Now assume that \(\vec{u} \cdot \vec{v} \lt 0\text{.}\) Then
\begin{align*} |\vec{u} \cdot \vec{v}| = - (\vec{u} \cdot \vec{v}) = (-\vec{u}) \cdot \vec{v} \amp \leq ||-\vec{u}|| \, ||\vec{v}|| \\ \text{and since} \; ||-\vec{u}|| = ||\vec{u}||, \qquad \amp \\ \amp = ||\vec{u}|| \, ||\vec{v}|| \end{align*}
For equality, assume that \(\vec{u} = k \vec{v}\) for some real number \(k\text{.}\) Then
\begin{align*} |\vec{u}\cdot \vec{v} | \amp = |k \vec{v} \cdot \vec{v}| \\ \amp = |k| \; |\vec{v} \cdot \vec{v} | \\ \amp = |k| \; || \vec{v} ||^2 = ||k \vec{v}|| \, ||\vec{v}|| = ||\vec{u}|| \, ||\vec{v}||, \end{align*}
so equality holds in this case, and since these steps are reversible, equality holds if and only if one of the vectors is a scalar multiple of the other.
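A numerical test is no substitute for the proofs above, but a quick random check (a Python/NumPy sketch) can build confidence that both inequalities behave as claimed:

```python
import numpy as np

# Check |u.v| <= ||u|| ||v|| and ||u + v|| <= ||u|| + ||v|| on random vectors.
rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=5)
    v = rng.normal(size=5)
    assert abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12
print("both inequalities held in every random trial")
```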
We now use the dot product to define the angle between two vectors in \(\mathbb{R}^n\text{.}\)

Definition 1.6.19.

The angle between nonzero vectors \(\vec{u}\) and \(\vec{v}\) in \(\mathbb{R}^n\) is the value of \(\theta\) in \([0,\pi]\) that satisfies:
\begin{equation*} \cos \theta = \frac{ \vec{u} \cdot \vec{v}}{||\vec{u}||\, ||\vec{v}||} \end{equation*}
So now, given any two nonzero vectors, the expression above defines the angle between them.

Example 1.6.20.

Find the angle between
\begin{align*} \vec{u} \amp = \begin{bmatrix} 2 \\ -4 \\ 1 \\ 1 \end{bmatrix} \amp \vec{v} \amp = \begin{bmatrix} 3 \\ 2 \\ 0 \\ 5 \end{bmatrix}. \end{align*}
Solution.
\begin{align*} ||\vec{u}|| \amp =\sqrt{22}, \amp ||\vec{v}|| \amp = \sqrt{38} \end{align*}
\begin{equation*} \vec{u} \cdot \vec{v} = 6-8+0+5 = 3 \end{equation*}
and therefore the angle can be found by
\begin{equation*} \cos \theta = \frac{3}{\sqrt{22}\sqrt{38}} \approx 0.1037571696 \end{equation*}
and \(\theta \approx 84.04^{\circ}\text{.}\) And although it’s difficult to visualize these vectors, we can imagine the angle between them.
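As a check on the arithmetic, here is the same computation as a short Python/NumPy sketch:

```python
import numpy as np

u = np.array([2, -4, 1, 1])   # the vectors from Example 1.6.20
v = np.array([3, 2, 0, 5])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_theta)                          # approximately 0.1038
print(np.degrees(np.arccos(cos_theta)))   # approximately 84.04 degrees
```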
One of the most important angles in geometry is \(\theta=90^{\circ}\text{,}\) which occurs in right triangles and perpendicular lines. In terms of vectors, we use the dot product to define this.

Definition 1.6.21.

Two vectors \(\vec{u}\) and \(\vec{v}\) are perpendicular or orthogonal if their dot product is 0.
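For example, the following Python/NumPy sketch checks an arbitrarily chosen pair of orthogonal vectors (they do not come from the text):

```python
import numpy as np

# An arbitrary pair of orthogonal vectors in R^2.
u = np.array([1, 2])
v = np.array([-2, 1])

print(np.dot(u, v))   # 0, so u and v are orthogonal (the angle between them is 90 degrees)
```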