Gaussian elimination is an algorithm that uses a sequence of elementary row operations to solve a system of linear equations. In the post on systems of linear equations, we introduced systems of linear equations, the matrix of coefficients, and the augmented matrix. Here we first introduce the back-substitution method for solving special kinds of systems of linear equations. Then we show how to apply the Gaussian elimination algorithm to convert the augmented matrix of a system into a matrix suitable for the back-substitution method.

#### Back-substitution procedure

We start our discussion with an example. Consider the following system of linear equations:

$$ \begin{array}{rrrrrrr}
4x & - & 3y & + & z & = & -10\\
 & & y & + & 3z & = & 0\\
 & & & - & 5z & = & 15
\end{array} $$ From the third equation, we see that \(z = -3\). Substituting the value of \(z\) into the second equation, we obtain \(y = 9\), and finally, substituting the values of \(y\) and \(z\) into the first equation, we find \(x = 5\). Therefore, the solution of the above system is $$(x,y,z) = (5,9,-3).$$ The simple process explained above is called the back-substitution procedure.
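The back-substitution procedure can be sketched in code as follows; this is a minimal illustration, and the function name `back_substitute` and the list-of-lists matrix representation are our own choices, not part of the post:

```python
def back_substitute(U, b):
    """Solve U x = b, where U is upper-triangular with nonzero diagonal,
    by working from the last equation up to the first."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal entry
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# The example above: 4x - 3y + z = -10, y + 3z = 0, -5z = 15
U = [[4, -3, 1],
     [0,  1, 3],
     [0,  0, -5]]
b = [-10, 0, 15]
print(back_substitute(U, b))  # [5.0, 9.0, -3.0]
```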

The matrix of the coefficients of the given system is as follows: $$\begin{pmatrix} 4 & -3 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & -5 \end{pmatrix},$$ which is an example of an upper-triangular matrix.

Note that by definition, a square matrix is upper-triangular if all entries below the main diagonal are zero. In other words, a square matrix \(A = (a_{ij})\) is upper-triangular if \(j < i\) implies that \(a_{ij} = 0\). Triangular matrices are special kinds of square matrices. For more, see the post on special matrices and vectors.
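The defining condition \(j < i \Rightarrow a_{ij} = 0\) translates directly into code; the helper below is a hypothetical sketch with a name of our own choosing:

```python
def is_upper_triangular(A):
    """A square matrix A is upper-triangular when every entry strictly
    below the main diagonal (j < i) is zero."""
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(i))

print(is_upper_triangular([[4, -3, 1], [0, 1, 3], [0, 0, -5]]))  # True
print(is_upper_triangular([[1, 0], [2, 3]]))                     # False
```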

In the post on systems of linear equations, we explained that two linear systems are “equivalent” if they have the same solution set. Now, if we can convert a system of \(n\) linear equations in \(n\) variables into a new equivalent system whose matrix of coefficients is upper-triangular, then we are able to solve general systems of \(n\) equations in \(n\) variables. As a matter of fact, Gaussian elimination is an algorithm that does exactly this job, as we explain in the next section.

#### Solving systems of linear equations with the help of the Gaussian elimination algorithm

We illustrate the Gaussian elimination method on the following example: $$ \begin{array}{rrrrrrr}
2x & + & y & + & z & = & 5\\
4x & - & 6y & & & = & -2\\
-2x & + & 7y & + & 2z & = & 9
\end{array} $$ We can eliminate \(x\) from the second and third equations by subtracting twice the first equation from the second, and by adding the first equation to the third. So, we obtain the following new equivalent system: $$ \begin{array}{rrrrrrr}
2x & + & y & + & z & = & 5\\
 & - & 8y & - & 2z & = & -12\\
 & & 8y & + & 3z & = & 14
\end{array} $$

If we name the first, second, and third equations \(E_1\), \(E_2\), and \(E_3\), respectively, we can abbreviate what we did as follows: $$E_2 - 2 E_1 \rightarrow E^{\prime}_2, $$ $$E_3 + E_1 \rightarrow E^{\prime}_3.$$

Finally, we can eliminate \(y\) from the third equation by adding the second equation to the third, i.e., $$ E_3 + E_2 \rightarrow E^{\prime}_3,$$ and again, we obtain a new equivalent system whose matrix of coefficients is upper-triangular: $$ \begin{array}{rrrrrrr}
2x & + & y & + & z & = & 5\\
 & - & 8y & - & 2z & = & -12\\
 & & & & z & = & 2
\end{array} $$ Now, by the back-substitution procedure, we obtain the solution: $$(x,y,z) = (1,1,2).$$
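The whole procedure on this example can be sketched as follows; this is a minimal illustration under our own naming (`gaussian_elimination` is not from the post), intended for small systems with exact-looking arithmetic rather than as a production solver:

```python
def gaussian_elimination(A, b):
    """Reduce the system A x = b to upper-triangular form using row
    replacements (and a row swap if a pivot is zero), then back-substitute."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        if A[k][k] == 0:
            # permute with a lower row that has a nonzero entry in column k
            p = next(i for i in range(k + 1, n) if A[i][k] != 0)
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            # eliminate x_k from row i:  E_i - m E_k -> E'_i
            m = A[i][k] / A[k][k]
            A[i] = [aij - m * akj for aij, akj in zip(A[i], A[k])]
            b[i] -= m * b[k]
    # back-substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The example above: 2x + y + z = 5, 4x - 6y = -2, -2x + 7y + 2z = 9
A = [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
b = [5, -2, 9]
print(gaussian_elimination(A, b))  # [1.0, 1.0, 2.0]
```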

#### Elementary row operations in Gaussian elimination method

In order to solve the above linear system, as we have already seen, we used only one kind of row operation: adding a multiple of a row \(R_j\) to a row \(R_i\) and replacing \(R_i\) with the result in the new equivalent system, i.e., $$R_i + a R_j \rightarrow R^{\prime}_i.$$

Sometimes it is necessary to permute the rows. For example, in the following system $$ \begin{array}{rrrrrrr}
2x & + & y & + & z & = & 5\\
 & & & & z & = & 2\\
 & - & 8y & - & 2z & = & -12
\end{array} $$ the variable \(y\) does not appear in the second equation, so we cannot eliminate it from the third equation with the help of the second. The solution, however, is simple: permute the second and third equations. We see that another necessary row operation is the permutation of rows. In the example of the section on the back-substitution procedure, the last equation is \(-5z = 15\). If we multiply the last equation by \(-1/5\), we obtain \(z = -3\). This is called the scaling row operation.

Therefore, in many standard books on linear algebra, there are three elementary row operations:

- (Replacement) Replace a row \(R_i\) with the sum of \(R_i\) and a multiple of another row \(R_j\)
- (Permutation) Permute one row with another row
- (Scaling) Multiply a row by a nonzero constant
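The three operations above can be sketched on a matrix stored as a list of rows; the function names below are our own, not standard library calls:

```python
def replacement(A, i, j, a):
    """R_i + a R_j -> R_i: add a multiple of row j to row i."""
    A[i] = [x + a * y for x, y in zip(A[i], A[j])]

def permutation(A, i, j):
    """Interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

def scaling(A, i, c):
    """c R_i -> R_i for a nonzero constant c."""
    assert c != 0
    A[i] = [c * x for x in A[i]]

# The scaling example from the text: multiply -5z = 15 by -1/5 to get z = -3.
row = [[0, 0, -5, 15]]         # augmented row for the equation -5z = 15
scaling(row, 0, -1/5)
print(row[0][2], row[0][3])    # 1.0 -3.0
```

Each operation mutates the matrix in place, mirroring how the operations act on the equations of the system.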

**Exercise**. Prove that each elementary row operation is a reversible operation.

By definition, a matrix \(B\) is “row equivalent” to a matrix \(A\) if there is a sequence of elementary row operations that transforms \(A\) into \(B\).

**Exercise**. Use the previous exercise to show that “row equivalence” is an equivalence relation.

For applications of elementary row operations, see the post on elementary matrices.