Although the number of operations needed to solve a linear system by Gaussian elimination may seem large, comparing it with another standard method of solution, Cramer’s rule, shows that Gaussian elimination is quite efficient.
Recall that Cramer’s Rule for solving the linear system \(\mathbf{Ax}=\mathbf{b}\) with a square matrix \(\mathbf{A}\) states that
\begin{equation*}
\begin{aligned}x_{j}\amp = \frac{1}{|\mathbf{A}|}\begin{vmatrix}a_{1,1}\amp a_{1,2}\amp \cdots\amp b_{1}\amp \cdots\amp a_{1,n}\\ a_{2,1}\amp a_{2,2}\amp \cdots\amp b_{2}\amp \cdots\amp a_{2,n}\\ \vdots\amp \amp \vdots\amp \amp \vdots \\ a_{n,1}\amp a_{n,2}\amp \cdots\amp b_{n}\amp \cdots\amp a_{n,n}\\\end{vmatrix}\end{aligned}
\end{equation*}
where the vector \(\mathbf{b}\) has been placed in the \(j\)th column of the determinant.
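For concreteness, the formula can be sketched for a 2 by 2 system. The helper names `det2` and `cramer2` below are hypothetical, chosen only for this illustration.

```python
def det2(m):
    # 2 by 2 determinant: 2 multiplications and 1 subtraction
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(A, b):
    # Cramer's rule: replace column j of A by b, then divide by |A|
    d = det2(A)
    x0 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x1 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return [x0, x1]

# Solves 2x + y = 3, x + 3y = 5
print(cramer2([[2, 1], [1, 3]], [3, 5]))  # → [0.8, 1.4]
```

Substituting the solution back confirms it: \(2(0.8) + 1.4 = 3\) and \(0.8 + 3(1.4) = 5\).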
To determine the number of operations for Cramer’s Rule, we first need to know the number of operations to find a determinant. If \(\mathbf{A}\) is a 2 by 2 matrix, then its determinant takes \(2\) multiplications and 1 addition. For larger determinants, the cofactor expansion method is often used, which is
\begin{equation*}
\begin{aligned}|\mathbf{A}|\amp = a_{1,1}M_{1,1}- a_{1,2}M_{1,2}+ a_{1,3}M_{1,3}- \cdots + (-1)^{n+1} a_{1,n}M_{1,n}\\\amp = \sum_{k=1}^{n} (-1)^{k+1} a_{1,k}M_{1,k}\end{aligned}
\end{equation*}
where \(M_{i,j}\) is the minor (that is, the determinant of the matrix with the \(i\)th row and \(j\)th column removed). If we let \(g(n)\) be the number of multiplications needed for an \(n\) by \(n\) determinant, then since the expansion requires \(n\) minors, each an \(n-1\) by \(n-1\) determinant,
\begin{equation*}
\begin{aligned}g(n)\amp = n \cdot g(n-1)\end{aligned}
\end{equation*}
and we know that \(g(1) = 1\text{.}\) This recurrence is the definition of the factorial, so \(g(n)=n!\text{.}\)
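The factorial growth can be observed by instrumenting a cofactor expansion with a multiplication counter, as in the sketch below (the function name is hypothetical). The exact tally differs slightly from \(g(n)\) because this version also counts the final products \(a_{1,k} \cdot M_{1,k}\), but either count grows factorially.

```python
def det_with_count(A, count):
    """Cofactor expansion along the first row; count[0] tallies
    the scalar multiplications performed."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total, sign = 0, 1
    for k in range(n):
        # Minor M_{1,k+1}: delete row 1 and column k+1
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        M = det_with_count(minor, count)
        count[0] += 1          # the multiplication a_{1,k} * M_{1,k}
        total += sign * A[0][k] * M
        sign = -sign           # alternating cofactor signs
    return total

count = [0]
print(det_with_count([[1, 2, 3], [4, 5, 6], [7, 8, 10]], count))  # → -3
print(count[0])  # → 9 multiplications for a 3 by 3 determinant
```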
The number of operations for Cramer’s rule on an \(n\) by \(n\) matrix can be found by noting that \(n+1\) determinants are needed: one for \(|\mathbf{A}|\) and one for the numerator of each of the \(n\) unknowns. Thus Cramer’s rule requires \((n+1)\,n!=(n+1)!\) multiplications, which is much larger than the \(O(n^{3})\) cost of Gaussian elimination.
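A quick numerical comparison makes the gap vivid. The sketch below contrasts the \((n+1)!\) multiplication count derived above with \(n^{3}\) as a stand-in for the Gaussian elimination cost (the exact constant in the \(O(n^{3})\) term is not specified in the text).

```python
import math

# Compare multiplication counts: Cramer's rule, (n+1)!, versus
# n^3 as a proxy for the Gaussian elimination operation count.
for n in (5, 10, 15):
    cramer = math.factorial(n + 1)
    gauss = n ** 3
    print(f"n={n}: Cramer {cramer}  vs  Gaussian ~{gauss}")
```

Already at \(n=10\), Cramer’s rule needs \(11! = 39{,}916{,}800\) multiplications against roughly a thousand for elimination.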