
Applied Mathematics

Section 2.8 Determinants of Square Matrices

Recall from Section 2.6 that we saw two examples. Example 2.6.2 showed that the vectors
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}\right\} \end{equation*}
is a linearly independent set and
\begin{equation*} \left\{ \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \end{bmatrix}\right\} \end{equation*}
is a set of linearly dependent vectors in \(\mathbb{R}^2\text{.}\)
In general, if we have two vectors
\begin{equation*} \boldsymbol{u} = \begin{bmatrix} a \\ c\end{bmatrix}, \qquad \boldsymbol{v}= \begin{bmatrix} b\\ d \end{bmatrix} \end{equation*}
we can determine if they are linearly dependent or independent by solving
\begin{equation*} k_1 \boldsymbol{u} + k_2 \boldsymbol{v} = \boldsymbol{0}. \end{equation*}
To determine linear independence, we need to solve for \(k_1\) and \(k_2\text{.}\) This can be found with the augmented matrix:
\begin{equation*} \left[ \begin{array}{rr|r} a \amp b \amp 0 \\ c \amp d \amp 0 \end{array}\right] \end{equation*}
Row-reducing
\begin{equation*} -c R_1 + a R_2 \to R_2 \qquad \left[ \begin{array}{rr|r} a \amp b \amp 0 \\ 0 \amp -c b + ad \amp 0 \end{array}\right] \end{equation*}
This system has a unique solution (namely \(k_1=k_2=0\)) if \(ad-bc \neq 0\) and infinitely many solutions if \(ad-bc =0\text{.}\) Analogous situations occur elsewhere in linear algebra (such as with the inverse matrix), so we give this function from matrices to the reals a name. For
\begin{equation} A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\tag{2.8.1} \end{equation}
define \(\det(A) = ad-bc\text{.}\)
This section expands the definition to other matrix sizes and lists other properties of the determinant.
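As a quick illustration (our own sketch, not part of the text), the \(2 \times 2\) determinant can be computed and used to test linear independence in a few lines of Python:

```python
def det2(u, v):
    """Determinant ad - bc of the 2x2 matrix with columns u = [a, c] and v = [b, d]."""
    a, c = u
    b, d = v
    return a * d - b * c

# The two sets from Example 2.6.2:
print(det2([1, 0], [1, 1]))  # nonzero, so the set is linearly independent
print(det2([1, 2], [2, 4]))  # zero, so the set is linearly dependent
```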

Subsection 2.8.1 Definition of the Determinant

Before formally defining the determinant for a general matrix, we need some background. We first need to understand a permutation of a set of integers. In short, a permutation is a shuffling of items. In the context of determinants, the items are the first \(n\) integers.
For example, there are six permutations of \(\{1,2,3\}\text{,}\) specifically \((1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2)\) and \((3,2,1)\text{.}\) Mathematically, we often think of a single permutation as a function from the set \(\{1,2,\ldots, n\}\) to itself. For example, for the permutation \((2,3,1)\text{,}\) the function \(\sigma\) is
\begin{equation*} \sigma(1) = 2 \qquad \sigma(2) = 3 \qquad \sigma(3)=1. \end{equation*}
It is important to know that there are \(n!\) permutations of the set \(\{1,2,\ldots,n\}\text{.}\) Also, any permutation can be built from the trivial permutation \((1,2,\ldots,n)\) by a sequence of swaps. For example, \((2,3,1)\) can be created by starting with \((1,2,3)\text{,}\) swapping the first two elements to get \((2,1,3)\text{,}\) and then swapping the last two elements to get \((2,3,1)\text{.}\)

Definition 2.8.1.

A permutation \(\sigma\) is even if it can be generated from the trivial permutation with an even number of swaps and odd if it can be generated with an odd number of swaps. Also, the sign of the permutation, denoted \(\sgn\text{,}\) is
\begin{equation*} \sgn(\sigma) = \begin{cases} 1 \amp \text{if $\sigma$ is even} \\ -1 \amp \text{if $\sigma$ is odd.} \end{cases} \end{equation*}
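The sign can be computed by counting inversions (pairs that are out of order), since the parity of the number of inversions equals the parity of the number of swaps. Here is a sketch of that idea in Python (our own illustration, not part of the text):

```python
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation: (-1) raised to the number of inversions,
    i.e., pairs (i, j) with i < j but sigma[i] > sigma[j]."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

# The six permutations of (1, 2, 3) and their signs:
for p in permutations((1, 2, 3)):
    print(p, sgn(p))
```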

Definition 2.8.2.

For an \(n \times n\) matrix, \(A\text{,}\) the determinant is
\begin{equation} \det(A) = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}\tag{2.8.2} \end{equation}
where the sum is over all permutations \(\sigma\) of \(\{1,2,\ldots,n\}\text{.}\)
Although this definition works for all square matrices, let’s ground ourselves a bit more before moving on. First, in the case of a \(1 \times 1\) matrix,
\begin{equation*} \det(A) = \det([a_{1,1}]) = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} = a_{1,1} \end{equation*}
where we have used the fact that there is one permutation of the set \(\{1\}\text{,}\) resulting in the scalar that is the only entry.
Recall that there are two permutations of \(\{1,2\}\text{,}\) namely \((1,2)\) and \((2,1)\text{,}\) so for a \(2 \times 2\) matrix, we have
\begin{equation*} \begin{aligned}\det\left(\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\right) \amp = \sgn((1,2))\, a_{1,1}a_{2,2} + \sgn((2,1))\, a_{1,2}a_{2,1} \\ \amp = (1) a_{1,1} a_{2,2} + (-1) a_{1,2} a_{2,1} = ad-bc \end{aligned} \end{equation*}
where recall that the 2nd term has the permutation \(\sigma(1)=2, \sigma(2)=1\text{.}\) This is identical to the \(2 \times 2\) example above.
Now that we have the definition in Definition 2.8.2, it may seem that we can calculate the determinant of any square matrix. The following example shows how to calculate it with a \(3 \times 3\) matrix:

Example 2.8.3.

Use the definition in Definition 2.8.2 to find \(\det(A)\) if
\begin{equation*} A = \begin{bmatrix} 3 \amp 2 \amp -1 \\ 0 \amp 2 \amp 1 \\ 1 \amp 0 \amp -2 \end{bmatrix} \end{equation*}
Solution.
Note that the permutations of \(\{1,2,3\}\) are those listed above, and for a general \(3 \times 3\) matrix \(A\text{,}\)
\begin{equation} \begin{aligned} \det(A) \amp = a_{1,1} a_{2,2} a_{3,3} - a_{1,1}a_{2,3}a_{3,2} - a_{1,2} a_{2,1} a_{3,3} \\ \amp \qquad + a_{1,2}a_{2,3}a_{3,1} + a_{1,3} a_{2,1} a_{3,2} - a_{1,3}a_{2,2}a_{3,1} \end{aligned}\tag{2.8.3} \end{equation}
and then evaluating it with the given elements of \(A\)
\begin{equation*} \begin{aligned} \det(A) \amp = (3)(2)(-2) - (3)(1)(0) - (2)(0)(-2) \\ \amp \qquad + (2)(1)(1) + (-1)(0)(0) - (-1)(2)(1) \\ \amp = -12 + 0 + 0 + 2 + 0 + 2 = -8 \end{aligned} \end{equation*}
This example isn’t too bad. However, due to the nature of permutations, the formula in (2.8.2) is unwieldy; for a \(5 \times 5\) matrix there would be \(5!=120\) terms. Fortunately, there are other formulas available to perform the calculation. Notice also that with a little factoring we can simplify the formula in (2.8.3) and make it easier to compute. This will be done formally below.
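To make the cost concrete, here is a direct Python implementation of formula (2.8.2) (our own sketch, not part of the text); it loops over all \(n!\) permutations, which is exactly why it is impractical for large \(n\text{:}\)

```python
from itertools import permutations
from math import prod

def sgn(sigma):
    """Sign via inversion count: (-1)^(number of out-of-order pairs)."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant by the permutation-sum definition (2.8.2): n! terms."""
    n = len(A)
    return sum(sgn(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

print(det([[3, 2, -1], [0, 2, 1], [1, 0, -2]]))  # -8
```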
Before presenting other computational methods of the determinant, let’s look at the properties of the determinant.

Subsection 2.8.2 Basic Properties of the Determinant

We start with a few basic properties of the determinant that will deepen our understanding of determinants and let us calculate a few basic ones.

Proof.

Let \(A\) be an upper-triangular matrix. That is, \(a_{i,j}=0\) if \(j \lt i\text{.}\) Then
\begin{equation*} \det(A) = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)} \end{equation*}
and the only permutation whose term contains no factor \(a_{i,\sigma(i)}\) with \(\sigma(i) \lt i\) (and hence does not vanish) is the trivial permutation. Therefore,
\begin{equation*} \det(A) = \sgn(\sigma) a_{1,1} a_{2,2}\cdots a_{n,n} = a_{1,1}a_{2,2} \cdots a_{n,n} \end{equation*}
since the trivial permutation \(\sigma\) has \(\sgn(\sigma) =1\text{.}\)
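As a quick sketch (our own illustration), this lemma turns an \(n!\)-term sum into a single product over the diagonal for triangular matrices:

```python
from math import prod

def det_triangular(A):
    """Determinant of an upper-triangular matrix: the product of its
    diagonal entries, per the lemma above."""
    return prod(A[i][i] for i in range(len(A)))

print(det_triangular([[1, 4, 0], [0, 2, 5], [0, 0, 32]]))  # 64
```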

Proof.

Proof.

Proof.

Let \(\sigma'\) be the permutation resulting from swapping positions \(i\) and \(j\) in the permutation \(\sigma\text{.}\)
Since \(\sigma\) and \(\sigma'\) differ by a single swap,
\begin{equation*} \sgn(\sigma) = -\sgn(\sigma') \end{equation*}
\begin{equation*} \begin{aligned} \det(A') \amp = \sum_{\sigma'} \sgn(\sigma') a_{1,\sigma'(1)} \cdots a_{j, \sigma'(j)} \cdots a_{i,\sigma'(i)} \cdots a_{n,\sigma'(n)} \\ \amp = \sum_{\sigma'} \sgn(\sigma') a_{1,\sigma'(1)} \cdots a_{i, \sigma'(i)} \cdots a_{j,\sigma'(j)} \cdots a_{n,\sigma'(n)} \\ \amp = \sum_{\sigma'} (-\sgn(\sigma)) a_{1,\sigma(1)} \cdots a_{i, \sigma(i)} \cdots a_{j,\sigma(j)} \cdots a_{n,\sigma(n)} \\ \amp = - \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots a_{i, \sigma(i)} \cdots a_{j,\sigma(j)} \cdots a_{n,\sigma(n)} \\ \amp = - \det(A) \end{aligned} \end{equation*}
where we note that summing over all \(\sigma'\) is the same as summing over all \(\sigma\text{.}\)
The proof of this is left to the reader.
We will see below that elementary matrices play an important role in determinants. We also saw in Lemma 2.8.7 that performing a row swap on a matrix changes the sign of its determinant. The next pair of lemmas are also related to row operations.

Proof.

\begin{equation*} \begin{aligned} \det(A') \amp = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots (ca_{i,\sigma(i)}) \cdots a_{n, \sigma(n)} \\ \amp = c \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots a_{i,\sigma(i)} \cdots a_{n, \sigma(n)} \\ \amp = c \det(A) \end{aligned} \end{equation*}
The following lemma concerns the row operation \(R_i + cR_j \to R_i\text{.}\) Surprisingly, this does not change the determinant.

Proof.

\begin{equation*} \begin{aligned} \det(A') \amp = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots (a_{i,\sigma(i)} + c a_{j,\sigma(i)}) \cdots a_{j, \sigma(j)} \cdots a_{n, \sigma(n)} \\ \amp = \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots a_{i,\sigma(i)}\cdots a_{j, \sigma(j)} \cdots a_{n, \sigma(n)} + c \sum_{\sigma} \sgn(\sigma) a_{1,\sigma(1)} \cdots a_{j,\sigma(i)} \cdots a_{j, \sigma(j)} \cdots a_{n, \sigma(n)} \\ \amp = \det(A) \end{aligned} \end{equation*}
where in the second step we used the property that a matrix with two identical rows has determinant 0, so the second sum vanishes.
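A small numeric check of this lemma (our own illustration with a made-up matrix and multiplier): adding a multiple of one row to another leaves a \(2 \times 2\) determinant unchanged.

```python
def det2(M):
    """ad - bc for a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[3, 2], [1, -2]]
k = 5
# Perform the row operation R2 + k*R1 -> R2:
A_prime = [A[0], [A[1][0] + k * A[0][0], A[1][1] + k * A[0][1]]]
print(det2(A), det2(A_prime))  # both -8
```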

Subsection 2.8.3 Determinants of Elementary Matrices

Recall from Section 2.4 that the elementary matrices have the property that multiplying a matrix by one performs a row operation. We will use these matrices to establish additional properties of determinants.
The proofs of these are related to those above and the formalization of them is left to the reader.
Elementary matrices play a fundamental role related to determinants. This next set of lemmas shows that if a matrix is multiplied by an elementary matrix, then the determinant of the product is the product of the determinants.

Proof.

We now see that the same result works for multiplying a row in a matrix by a constant.

Proof.

This last lemma shows a similar result for the row multiplication with an addition.

Proof.

Proof.

Let \(A'\) and \(B'\) be the reduced row-echelon forms of \(A\) and \(B\) respectively. Let \(E_1, E_2, \ldots E_k\) and \(F_1, F_2, \ldots, F_{\ell}\) be the elementary matrices such that
\begin{equation*} A = E_1 E_2 \cdots E_k A' \qquad B = F_1 F_2 \cdots F_{\ell}B' \end{equation*}
(see lemma ???). If \(A' = I\text{,}\) that is, if \(A\) is invertible, then
\begin{equation*} \begin{aligned} \det(AB) \amp = \det(E_1 E_2 \cdots E_k I F_1 F_2 \cdots F_{\ell}B') \\ \amp = \det(E_1) \det(E_2) \cdots \det(E_k) \cdot \det(I) \det(F_1) \det(F_2) \cdots \det(F_{\ell})\det(B') \\ \amp = \det(A) \det(B) \end{aligned} \end{equation*}
If \(\det(A) = 0\text{,}\) then \(A'\) is not the identity matrix and must have a row of zeros. Call this row \(i\text{.}\) Then
\begin{equation*} \begin{aligned} \det(AB) \amp = \det(E_1 E_2 \cdots E_k A' B) \\ \amp = \det(E_1) \det(E_2) \cdots \det(E_k) \det(A'B) \\ \amp = \det(E_1) \det(E_2) \cdots \det(E_k) \det(E_{cR_i} A'B) \\ \end{aligned} \end{equation*}
where multiplying \(A'\) by the elementary matrix \(E_{cR_i}\) does not change it since there are only 0s in row \(i\text{.}\)
\begin{equation*} \begin{aligned} \amp = \det(E_1) \det(E_2) \cdots \det(E_k) c \det(A'B) \end{aligned} \end{equation*}
Since this shows that \(\det(AB) = c \det(AB)\) for all values of \(c\text{,}\) it must be that \(\det(AB) = 0\text{,}\) and since \(\det(A)=0\text{,}\) we have \(\det(AB) = \det(A)\det(B)\) and the result follows.
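The multiplicative property is easy to spot-check numerically; here is an illustrative \(2 \times 2\) example with made-up matrices (our own sketch):

```python
def det2(M):
    """ad - bc for a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 5]]
print(det2(matmul2(A, B)), det2(A) * det2(B))  # both -20
```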

Subsection 2.8.4 Gauss’ Method for Calculating Determinants

We have seen how to calculate the determinant for matrices with special structure, but so far a general matrix requires the definition. Gauss’ Method for Calculating Determinants uses Lemma 2.8.7, Lemma 2.8.9 and Lemma 2.8.10 to simplify a matrix before taking the determinant. Generally, row operations are performed to put the matrix in upper-triangular (or echelon) form, and then Lemma 2.8.4 is applied. The following examples show how to use the method.

Example 2.8.17.

Find the determinant of the following matrix using a) the formula for \(2 \times 2\) determinants and b) using Gauss’ method.
\begin{equation*} T = \begin{bmatrix} 3 \amp 2 \\ 1 \amp -2 \end{bmatrix} \end{equation*}
Solution.
Using the formula \(|T| = ad-bc=-6-2=-8\text{.}\)
Using Gauss’ method,
\begin{align*} \qquad |T| \amp= \begin{vmatrix} 3 \amp 2 \\ 1 \amp -2 \end{vmatrix} \\ R_1 \leftrightarrow R_2 \qquad -|T|\amp= \begin{vmatrix} 1 \amp -2 \\ 3 \amp 2 \end{vmatrix} \\ -3R_1 + R_2 \rightarrow R_2 \qquad -|T|\amp= \begin{vmatrix} 1 \amp -2 \\ 0 \amp 8 \end{vmatrix} = 8 \end{align*}
So \(|T|=-8\text{.}\)
This shows that although Gauss’ method succeeds in finding the determinant, it takes more operations than the simple formula.

Example 2.8.18.

Use Gauss’ method to find the determinants of the following matrices:
\begin{align*} T \amp = \begin{bmatrix} 3 \amp 0 \amp 2 \\ 1 \amp 4 \amp 0 \\ 0 \amp 2 \amp 5 \end{bmatrix} \amp S \amp = \begin{bmatrix} 0 \amp 1 \amp 3 \amp -4 \\ 2 \amp 0 \amp 2 \amp 7 \\ 0 \amp 0 \amp 6 \amp 8 \\ 1 \amp 0 \amp 10 \amp 6 \end{bmatrix} \end{align*}
Solution.
For both examples, we use row operations and keep track of any row swaps (introducing a \(-1\)) or multiples.
  1. \begin{align*} |T| = \amp \begin{vmatrix} 3 \amp 0 \amp 2 \\ 1 \amp 4 \amp 0 \\ 0 \amp 2 \amp 5 \end{vmatrix}\\ R_1 \leftrightarrow R_2 \qquad -|T|= \amp \begin{vmatrix} 1 \amp 4 \amp 0 \\ 3 \amp 0 \amp 2 \\ 0 \amp 2 \amp 5 \end{vmatrix} \\ -3 R_1 + R_2 \rightarrow R_2 \qquad -|T| = \amp \begin{vmatrix} 1 \amp 4 \amp 0 \\ 0 \amp -12 \amp 2 \\ 0 \amp 2 \amp 5 \end{vmatrix} \\ R_2 \leftrightarrow R_3 \qquad |T|= \amp \begin{vmatrix} 1 \amp 4 \amp 0 \\ 0 \amp 2 \amp 5 \\ 0 \amp -12 \amp 2 \\ \end{vmatrix} \\ 6 R_2 + R_3 \rightarrow R_3 \qquad |T| = \amp \begin{vmatrix} 1 \amp 4 \amp 0 \\ 0 \amp 2 \amp 5 \\ 0 \amp 0 \amp 32 \\ \end{vmatrix} = 64 \end{align*}
  2. \begin{align*} |S| \amp = \begin{vmatrix} 0 \amp 1 \amp 3 \amp -4 \\ 2 \amp 0 \amp 2 \amp 7 \\ 0 \amp 0 \amp 6 \amp 8 \\ 1 \amp 0 \amp 10 \amp 6 \end{vmatrix} \\ R_1 \leftrightarrow R_4 \qquad -|S| \amp = \begin{vmatrix} 1 \amp 0 \amp 10 \amp 6 \\ 2 \amp 0 \amp 2 \amp 7 \\ 0 \amp 0 \amp 6 \amp 8 \\ 0 \amp 1 \amp 3 \amp -4 \\ \end{vmatrix} \\ -2R_1 + R_2 \rightarrow R_2 \qquad -|S| \amp = \begin{vmatrix} 1 \amp 0 \amp 10 \amp 6 \\ 0 \amp 0 \amp -18 \amp -5 \\ 0 \amp 0 \amp 6 \amp 8 \\ 0 \amp 1 \amp 3 \amp -4 \\ \end{vmatrix} \\ R_2 \leftrightarrow R_4 \qquad |S| \amp= \begin{vmatrix} 1 \amp 0 \amp 10 \amp 6 \\ 0 \amp 1 \amp 3 \amp -4 \\ 0 \amp 0 \amp 6 \amp 8 \\ 0 \amp 0 \amp -18 \amp -5 \\ \end{vmatrix} \\ 3 R_3 + R_4 \rightarrow R_4 \qquad |S| \amp = \begin{vmatrix} 1 \amp 0 \amp 10 \amp 6 \\ 0 \amp 1 \amp 3 \amp -4 \\ 0 \amp 0 \amp 6 \amp 8 \\ 0 \amp 0 \amp 0 \amp 19 \\ \end{vmatrix} = 6 (19) = 114 \end{align*}
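The bookkeeping in these examples can be automated. The following is our own sketch of Gauss’ method in Python (using exact `Fraction` arithmetic to avoid round-off): reduce to upper-triangular form with row swaps (each flips the sign) and row replacements (which leave the determinant unchanged), then multiply down the diagonal.

```python
from fractions import Fraction

def det_gauss(A):
    """Determinant via Gauss' method: row-reduce to upper-triangular
    form, tracking sign changes from row swaps, then multiply the
    diagonal entries."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # no pivot: the determinant is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign  # a row swap flips the sign
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            # R_r - m*R_col -> R_r leaves the determinant unchanged.
            A[r] = [A[r][j] - m * A[col][j] for j in range(n)]
    prod = Fraction(sign)
    for i in range(n):
        prod *= A[i][i]
    return prod

print(det_gauss([[3, 0, 2], [1, 4, 0], [0, 2, 5]]))   # 64
print(det_gauss([[0, 1, 3, -4], [2, 0, 2, 7],
                 [0, 0, 6, 8], [1, 0, 10, 6]]))       # 114
```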

Subsection 2.8.5 Expansion Method for finding the Determinant

Although Gauss’ method is a robust and generally efficient method for finding determinants, a method called the Laplace Expansion method can be quite helpful at times as well. Before defining it, we first need the notions of a matrix minor and a cofactor.

Definition 2.8.19.

For any \(n\times n\) matrix \(T\text{,}\) the \((n - 1)\times(n - 1)\) matrix formed by deleting row \(i\) and column \(j\) of \(T\) is the \(i,j\) minor of \(T\text{,}\) denoted \(T_{i,j}\text{.}\) The \(i,j\) cofactor of \(T\) is \((-1)^{i+j}\) times the determinant of the \(i,j\) minor, that is, \((-1)^{i+j} |T_{i,j}|\text{.}\)

Example 2.8.20.

Find the \(T_{1,1}\) and \(T_{2,3}\) minors and cofactors of the matrix
\begin{equation*} T = \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 5 \amp 6 \\ 7 \amp 8 \amp 9 \end{bmatrix} \end{equation*}
Solution.
Recall that the \(T_{i,j}\) minor is found by removing the \(i\)th row and \(j\)th column, so
\begin{align*} T_{1,1} \amp = \begin{bmatrix} 5 \amp 6 \\ 8 \amp 9 \end{bmatrix} \amp T_{2,3} \amp = \begin{bmatrix} 1 \amp 2 \\ 7 \amp 8 \end{bmatrix} \end{align*}
and the cofactors are the determinants of each of these matrices times \((-1)^{i+j}\) or
\begin{align*} (-1)^{1+1} |T_{1,1}| \amp= (1) (45-48) = -3 \\ (-1)^{2+3} |T_{2,3}| \amp = (-1) (8-14) = 6 \end{align*}
Now that we have the prerequisites, the following is the Laplace Expansion method for finding a determinant.

Example 2.8.22.

Use the expansion formula to find the determinants of the matrices in Example 2.8.18, namely
\begin{align*} T \amp = \begin{bmatrix} 3 \amp 0 \amp 2 \\ 1 \amp 4 \amp 0 \\ 0 \amp 2 \amp 5 \end{bmatrix} \amp S \amp = \begin{bmatrix} 0 \amp 1 \amp 3 \amp -4 \\ 2 \amp 0 \amp 2 \amp 7 \\ 0 \amp 0 \amp 6 \amp 8 \\ 1 \amp 0 \amp 10 \amp 6 \end{bmatrix} \end{align*}
Solution.
In the case of \(T\text{,}\) we will expand across the first row and use the formula for the \(2\times 2\) determinant.
\begin{align*} |T| \amp = (-1)^{1+1} (3) \begin{vmatrix}4 \amp 0 \\ 2 \amp 5 \end{vmatrix} + (-1)^{1+2} (0) \begin{vmatrix} 1 \amp 0 \\ 0 \amp 5 \end{vmatrix} + (-1)^{1+3} (2) \begin{vmatrix} 1 \amp 4 \\ 0 \amp 2 \end{vmatrix} \\ \amp = 3 (20) + (2) (2-0) = 64 \end{align*}
and for \(S\text{,}\) we’ll expand down the 2nd column because all but one of its entries are zero. Because of this, we won’t show the minors \(S_{2,2}, S_{3,2}\) and \(S_{4,2}\text{.}\)
\begin{align*} |S| \amp = (-1)^{1+2} (1) \begin{vmatrix} 2 \amp 2 \amp 7 \\ 0 \amp 6 \amp 8 \\ 1 \amp 10 \amp 6 \end{vmatrix} + 0 + 0 + 0 \end{align*}
and now to find this \(3\times 3\) determinant, expand about the 2nd row
\begin{align*} |S| \amp = (-1) \bigl( (-1)^{2+2} (6) \begin{vmatrix} 2 \amp 7 \\ 1 \amp 6 \end{vmatrix} + (-1)^{2+3} (8) \begin{vmatrix} 2 \amp 2 \\ 1 \amp 10 \end{vmatrix} \bigr) \end{align*}
and now use the formula for \(2 \times 2\) determinants.
\begin{equation*} |S| = -(6 (12-7) - 8 (20-2)) = -(30- 144) =114 \end{equation*}
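The expansion method is naturally recursive; here is an illustrative sketch of it in Python (our own, expanding along the first row and skipping zero entries as in the example above):

```python
def det_laplace(A):
    """Determinant by Laplace expansion along the first row, recursively.
    Uses 0-based indices, so the cofactor sign for column j is (-1)**j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing to the expansion
        # The minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[3, 0, 2], [1, 4, 0], [0, 2, 5]]))  # 64
```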

Subsection 2.8.6 Geometry of Determinants

In the previous section, the determinant was introduced as a function that determines whether or not a matrix is singular, according to whether or not its value is 0. In this section, we will look at a geometric approach to the determinant and show that it can be used to determine areas (and volumes) of regions bounded by vectors. We will show that this geometric approach is consistent (in the two-dimensional case) with the properties in Definition 2.8.2.
Consider the parallelogram formed by two vectors. In the argument below, it is important that the vector \(\langle x_1, y_1 \rangle\) is below and to the right of the vector \(\langle x_2, y_2 \rangle\text{.}\)
Figure 2.8.23. Plot of two vectors in \(\mathbb{R}^2\) forming a parallelogram.
The area of the parallelogram can be determined by taking the area of the enclosing rectangle and subtracting out the rectangles \(A\) and \(F\) and triangles \(B, C, D\) and \(E\) as shown below:
Figure 2.8.24. Finding the area of the parallelogram
\begin{align*} \text{area of parallelogram} \amp = \text{area of enclosing rect} \\ \amp \qquad - \text{area of rectangle $A$} - \text{area of triangle $B$} \\ \amp \qquad \cdots - \text{area of rectangle $F$} \\ \amp = (x_1+x_2)(y_1+y_2) - x_2 y_1 - \frac{1}{2} x_1 y_1 \\ \amp \qquad - \frac{1}{2} x_2 y_2 - \frac{1}{2} x_2 y_2 - \frac{1}{2} x_1 y_1 - x_2 y_1 \\ \amp = x_1 y_2 - x_2 y_1 \end{align*}
and note that
\begin{equation*} \begin{vmatrix} x_1 \amp x_2 \\ y_1 \amp y_2 \end{vmatrix} = x_1 y_2 - x_2 y_1 \end{equation*}
This result is identical to the determinant seen above. As noted, the vectors were set up so that the area is positive; in general, one can define the area as the absolute value of the determinant.
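As a small worked check (our own illustration with made-up vectors):

```python
def parallelogram_area(v1, v2):
    """|x1*y2 - x2*y1|: the absolute value of the determinant of the
    matrix with v1 and v2 as columns."""
    x1, y1 = v1
    x2, y2 = v2
    return abs(x1 * y2 - x2 * y1)

print(parallelogram_area((3, 0), (1, 2)))  # 6
```

Note that a degenerate parallelogram (parallel vectors) has area 0, matching the singular case.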

Subsubsection 2.8.6.1 Transformation of the Vectors and the size of the Parallelogram

From above, the area of the parallelogram is the determinant of the vectors that are along the sides.
Consider two vectors in \(\mathbb{R}^2\) and rotate them so one is on the \(x\)-axis. Also take \(\boldsymbol{u}\) and multiply it by a factor of \(k\)
Figure 2.8.25. Scaling a parallelogram
From this geometric argument, the area of the parallelogram formed by the vectors \(\boldsymbol{v}\) and \(k\boldsymbol{u}\) appears to be \(k\) times larger. This is property 3 of Definition 2.8.2.
Next, let’s look at transformation \(\boldsymbol{u} + k\boldsymbol{v}\text{.}\) The picture on the left is the original two vectors and that on the right is the transformed vectors (with \(k\) about 0.2 in this picture). The original area and the transformed area are identical in this case since neither the height of the parallelogram nor its width has changed.
Figure 2.8.26. Skewing a parallelogram
This shows that replacing a row with the current row plus a constant multiple of another row leaves the area unchanged, which is consistent with property 1 of Definition 2.8.2.
The other transformation related to the determinant is property 2 of Definition 2.8.2: if one switches the order of the vectors (a row swap), the determinant changes sign. The area does not change, because the area is the absolute value of the determinant.
Definition 2.8.27.
In \(\mathbb{R}^n\text{,}\) the parallelepiped formed by \(\langle \boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_n \rangle\) is the set
\begin{equation*} \{ t_1 \boldsymbol{v}_1 + t_2 \boldsymbol{v}_2 + \cdots + t_n \boldsymbol{v}_n\; | \; t_1, t_2, \ldots, t_n \in [0,1] \} \end{equation*}
The volume of the parallelepiped is the absolute value of the determinant of the matrix whose columns are \(\boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_n\text{.}\)
Example 2.8.28.
Find the volume of the parallelepiped formed by the vectors:
\begin{equation*} \begin{bmatrix} 3 \\ 0 \\ 2 \end{bmatrix}, \begin{bmatrix} -1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} \end{equation*}
Solution.
The volume is the absolute value of the determinant of the matrix with these three columns. We’ll use Gauss’ method to find the determinant.
\begin{align*} |A| \amp= \begin{vmatrix} 3 \amp -1 \amp 2 \\ 0 \amp 2 \amp 3 \\ 2 \amp 0 \amp 1 \end{vmatrix} \\ 3R_3 \rightarrow R_3 \qquad 3|A| \amp= \begin{vmatrix} 3 \amp -1 \amp 2 \\ 0 \amp 2 \amp 3 \\ 6 \amp 0 \amp 3 \end{vmatrix} \\ -2 R_1 + R_3 \rightarrow R_3 \qquad 3|A| \amp= \begin{vmatrix} 3 \amp -1 \amp 2 \\ 0 \amp 2 \amp 3 \\ 0 \amp 2 \amp -1 \end{vmatrix} \\ -R_2 + R_3 \rightarrow R_3 \qquad 3|A| \amp= \begin{vmatrix} 3 \amp -1 \amp 2 \\ 0 \amp 2 \amp 3 \\ 0 \amp 0 \amp -4 \end{vmatrix} \end{align*}
and multiplying down the diagonal, \(3|A| = -24\text{,}\) so \(|A|=-8\text{.}\) This means that the volume is 8 cubic units.
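The computation above can be sketched in Python (our own illustration, with the \(3\times 3\) determinant hard-coded by cofactor expansion along the first row):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def parallelepiped_volume(v1, v2, v3):
    """Volume: absolute value of the determinant of the matrix whose
    columns are the three vectors."""
    M = [[v1[i], v2[i], v3[i]] for i in range(3)]
    return abs(det3(M))

print(parallelepiped_volume((3, 0, 2), (-1, 2, 0), (2, 3, 1)))  # 8
```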

Subsection 2.8.7 Other Properties of Determinants

This subsection presents a number of other properties of determinants. We start by showing that a matrix and its transpose have the same determinant.

Proof.

We first start with using the properties of elementary matrices
  • \(\displaystyle \det(E_{i \leftrightarrow j}) = \det(E_{i \leftrightarrow j}^{\intercal})\)
  • \(\displaystyle \det(E_{cR_i}) = \det(E_{cR_i}^{\intercal})\)
  • \(\displaystyle \det(E_{cR_i + R_j}) = \det(E_{cR_i + R_j}^{\intercal})\)
where proving these is left to the reader. That is, for any elementary matrix \(E\text{,}\) \(\det(E) = \det(E^{\intercal})\text{.}\)
The proof comes down to two cases. The first is when the matrix is invertible or \(\det(A) \neq 0\text{.}\) The second case is when \(A\) is not invertible.
  • Case 1: \(A\) is invertible.
    From Lemma 2.4.6, any invertible matrix \(A\) can be written as the product of elementary matrices. Assume that \(A = E_1 E_2 \cdots E_k\text{.}\) First note that
    \begin{equation*} \begin{aligned} \det(A) \amp = \det(E_1 E_2 \cdots E_k ) \\ \amp = \det(E_1) \det(E_2) \cdots \det(E_k) \end{aligned} \end{equation*}
    Now, evaluate the determinant of \(A^{\intercal}\) with
    \begin{equation*} \begin{aligned} \det(A^{\intercal}) \amp = \det((E_1 E_{2} \cdots E_k)^{\intercal}) \\ \amp = \det(E_k^{\intercal}E_{k-1}^{\intercal} \cdots E_1^{\intercal}) \\ \amp = \det(E_k^{\intercal}) \det(E_{k-1}^{\intercal}) \cdots \det(E_1^{\intercal}) \\ \amp = \det(E_k) \det(E_{k-1}) \cdots \det(E_1) \\ \amp = \det(A) \end{aligned} \end{equation*}
  • Case 2: \(A\) is not invertible
    If \(A\) is not invertible, then \(\det(A)=0\text{.}\) Also, since \(A\) is not invertible, then \(A^{\intercal}\) is also not invertible, so \(\det(A^{\intercal})=0\text{.}\)
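A quick numeric check of the theorem (our own illustration):

```python
def det2(M):
    """ad - bc for a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def transpose(M):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

A = [[3, 2], [1, -2]]
print(det2(A), det2(transpose(A)))  # both -8
```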
The last lemma presented here is related to the determinant of the inverse of a matrix.
The proof of this is left to the reader, but it follows from other lemmas in this section.