
Why is pivoting important in Gaussian elimination, and how significant are rounding errors with modern calculators?

Pivoting in Gaussian elimination helps improve numerical stability by choosing the largest pivot element, which reduces rounding errors. With modern calculators providing high precision, are rounding errors still a concern? Could you provide a simple example where not using partial pivoting results in significant errors?

Thank you!

Celestina
    It is quite normal that textbooks include such an example. If your book hasn't got an example, then give another one a try. If nothing else, it can be useful to see different takes on the same matter. – Carl Christian Aug 29 '24 at 07:43

1 Answer


There are many reasons why pivoting is important for Gaussian Elimination. Examples include:

  • Gaussian elimination fails if any of the pivots is zero.

  • Worse still, Gaussian elimination can return wildly inaccurate results if any pivot becomes very close to zero.

  • Gaussian elimination without pivoting can fail even if the matrix is nonsingular.

  • GE with pivoting helps reduce rounding errors: you are less likely to add or subtract numbers of vastly different magnitudes.
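As a sketch of the first failure mode, here is a minimal naive-GE routine (an illustrative implementation of my own, not code from the question) applied to a perfectly well-conditioned system whose leading entry happens to be zero:

```python
import numpy as np

def gauss_no_pivot(A, b):
    """Naive Gaussian elimination + back substitution, no row swaps.
    Fails outright when a pivot is exactly zero."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        if A[k, k] == 0.0:
            raise ZeroDivisionError(f"zero pivot in column {k}")
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Nonsingular system (solution x = y = 1), yet a11 = 0 kills naive GE:
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])
try:
    gauss_no_pivot(A, b)
except ZeroDivisionError as e:
    print("naive GE failed:", e)
```

A single row swap (i.e., partial pivoting) would make this system trivial, which is the point of the first bullet.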

Example 1: Use GE without pivoting

$$ \left[\begin{array}{rrrr|r} 2 & 4 & -2 & -2 & -4 \\ 6 & 2 & 4 & -3 & 5 \\ -3 & -3 & 8 & -2 & 7 \\ -1 & 1 & 6 & -3 & 7 \end{array}\right] $$

Note that there is nothing "wrong" with this system: $A$ is full rank, and the solution exists and is unique. The second stage of Gaussian elimination will not work because there is a zero in the pivot location, $a_{22}$.

Example 2: A singular matrix; apply GE without pivoting.

$$ \left[\begin{array}{rrr|r} 1 & -1 & 1 & 3\\ 2 & -2 & 4 & 8\\ 3 & -3 & -9 & 0 \end{array}\right] $$
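Here pivoting cannot rescue the elimination, because the matrix is genuinely singular (its first two columns are proportional). A short Python/NumPy sketch of the first elimination step makes this visible:

```python
import numpy as np

# Coefficient matrix from Example 2; columns 1 and 2 are proportional,
# so A is singular.
A = np.array([[1., -1.,  1.],
              [2., -2.,  4.],
              [3., -3., -9.]])

# One elimination step with the first pivot:
M = np.vstack([A[0],
               A[1] - 2.0 * A[0],
               A[2] - 3.0 * A[0]])
print(M)
# The second column is now zero on and below the diagonal, so NO row
# swap can produce a nonzero second pivot: elimination must stop.

print(np.linalg.matrix_rank(A))  # rank 2 < 3 confirms singularity
```

This is the key distinction from Example 1: there the breakdown is an artifact of the elimination order, while here it reflects a genuine property of the matrix.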

Example 3: The prototypical example of numerical instability

$$ \left[\begin{array}{rr|r} \epsilon & 1 & 1 \\ 1 & 1 & 2 \\ \end{array}\right] $$

With naïve GE, we have

$$ \left[\begin{array}{rr|r} \epsilon & 1 & 1 \\ 0 & \left(1 - \dfrac{1}{\epsilon}\right) & 2- \dfrac{1}{\epsilon} \\ \end{array}\right] $$

Solving for $x$ and $y$ using exact arithmetic:

$$\begin{align}y &= \dfrac{2 - \frac{1}{\epsilon}}{1 - \frac{1}{\epsilon}} = 1 + \dfrac{\epsilon}{\epsilon - 1} \approx 1 - \epsilon \\x &= \dfrac{1-y}{\epsilon} = \dfrac{-1}{\epsilon-1} = 1-\dfrac{\epsilon}{\epsilon-1} \approx 1 + \epsilon \end{align}$$

However, using finite-precision floating-point arithmetic with $\epsilon \approx 10^{-20}$, naive GE yields $x \approx 0$, $y \approx 1$: the computed $x$ is completely wrong.
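This collapse can be reproduced arithmetic step by arithmetic step. The Python sketch below (variable names are mine) mimics both elimination orders in IEEE double precision:

```python
eps = 1e-20

# Naive GE: the multiplier 1/eps ~ 1e20 swamps the O(1) entries.
m = 1.0 / eps
a22 = 1.0 - m        # rounds to -1e20; the "1" is lost
b2  = 2.0 - m        # rounds to -1e20; the "2" is lost
y_naive = b2 / a22                  # exactly 1.0
x_naive = (1.0 - y_naive) / eps     # exactly 0.0 -- completely wrong
print(x_naive, y_naive)             # 0.0 1.0

# Partial pivoting: swap rows so the pivot is 1, multiplier is eps.
# All intermediate quantities stay O(1), so nothing is lost.
y_piv = (1.0 - 2.0 * eps) / (1.0 - eps)  # ~ 1.0
x_piv = 2.0 - y_piv                       # ~ 1.0
print(x_piv, y_piv)
```

With pivoting, both computed values match the exact solution $x \approx 1 + \epsilon$, $y \approx 1 - \epsilon$ to full working precision.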

The stability of Gaussian elimination algorithms is better understood by measuring the growth of the elements in the reduced matrices.
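A standard way to quantify that growth is the growth factor: the largest entry appearing in any intermediate reduced matrix divided by the largest entry of the original $A$. Here is a rough sketch (a helper of my own, not a library routine) that exposes the enormous growth on the Example 3 matrix:

```python
import numpy as np

def growth_factor(A):
    """Growth factor of GE *without* pivoting: the largest entry seen
    in any intermediate reduced matrix over the largest entry of A."""
    A = A.astype(float).copy()
    n = A.shape[0]
    base = np.max(np.abs(A))
    peak = base
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
        peak = max(peak, np.max(np.abs(A)))
    return peak / base

# The epsilon matrix from Example 3: entries blow up to ~1/eps.
eps = 1e-10
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
print(growth_factor(A))  # ~1e10: huge growth signals instability
```

With partial pivoting, the growth factor of this matrix stays at 1, consistent with the accurate answers it produces.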

Note: as you observed, professional software and calculators often include routines that detect these corner cases and compensate for them to obtain correct results.

Moo