
Could anyone explain, using an example, why an ill-conditioned problem cannot be solved to high accuracy when floating-point arithmetic is used to compute an approximate solution?

  • I would like to know why this question has received a close vote. I am aware of exactly one text for undergraduates where this question is answered adequately, so this information is not easy to come by. – Carl Christian Oct 30 '20 at 00:14
  • @CarlChristian I'm not currently a close-voter on this question, but it does not follow the community guidelines about context very well. – KReiser Oct 30 '20 at 03:10
  • @KReiser. I am grateful for your response and for your attempt to explain the issue to me. At first glance you are right, but on a second reading I find the judgement harsh. To be sure, no personal work is included, and the OP's background must be deduced from the OP's few past questions. On the other hand, the context is clear to a numerical analyst. The question concerns the limits of finite precision arithmetic, so the motivation and relevance are clear. The question is faced by all students of numerical analysis, so no specific reference is needed. – Carl Christian Oct 30 '20 at 10:00
  • I recognize that this is subjective, but I am inclined to forgive the omission of any personal work for a question such as this. I have seen too many texts that rush through the difficult questions of the conditioning of problems and the stability of algorithms. I can easily imagine that OP's text is no exception. – Carl Christian Oct 30 '20 at 10:07

1 Answer


Consider the common problem of computing $y = f(x)$ where $f :\mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function. If the algorithm is backward stable in the relative sense, then the computed value $\hat{y}$ of $y$ satisfies $$ \hat{y} = f(\hat{x}),$$ where $$\left|\frac{x-\hat{x}}{x}\right| \leq C u.$$ Here $u$ is the unit roundoff and $C>0$ is a constant independent of $u$. A good algorithm has a small value of $C$. This is as good as it gets.

Now if the problem is ill-conditioned, then small changes in the input can cause large changes in the output. Specifically, if $\bar{x}$ is an approximation of $x$, then we cannot hope to do better than $$\left| \frac{f(x)-f(\bar{x})}{f(x)} \right| \approx \kappa_f(x) \left|\frac{x-\bar{x}}{x}\right|,$$ where $\kappa_f(x)$ is the relative condition number of $f$ at the point $x$, given by $$\kappa_f(x) = \left| \frac{xf'(x)}{f(x)} \right|.$$ A rigorous derivation of this relation from an abstract definition of the condition number can be found in this answer to a related question.
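For concreteness, here is a minimal sketch in Python with NumPy (the choice of $f(x) = \log(x)$ is my own illustration, not part of the argument above) that evaluates this condition number. Since $f'(x) = 1/x$, we get $\kappa_f(x) = 1/|\log x|$, which blows up as $x \to 1$:

```python
import numpy as np

# Relative condition number kappa_f(x) = |x f'(x) / f(x)| for
# f(x) = log(x): here f'(x) = 1/x, so kappa_f(x) = 1/|log(x)|,
# which blows up as x approaches 1.
def kappa_log(x):
    return abs(x * (1.0 / x) / np.log(x))

for x in [2.0, 1.0 + 1e-4, 1.0 + 1e-8, 1.0 + 1e-12]:
    print(f"x = {x:<20} kappa_f(x) = {kappa_log(x):.3e}")
```

The last case reports $\kappa_f(x) \approx 10^{12}$, so even a one-ulp perturbation of the input must be expected to wipe out roughly twelve of the sixteen significant digits that double precision offers.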

In particular, we have the following bound for the relative forward error: $$\left| \frac{ y - \hat{y} }{y} \right| = \left| \frac{f(x)-f(\hat{x})}{f(x)} \right| \approx \kappa_f(x) \left|\frac{x-\hat{x}}{x}\right| \leq C \kappa_f(x) u.$$ In summary, the best we can hope for is a small relative backward error, but this is not enough to guarantee a small relative forward error when the problem is ill-conditioned, i.e., when $\kappa_f(x)$ is so large that $\kappa_f(x) u$ is not small. On the other hand, if $C\kappa_f(x)u$ is tiny, then all is well and the relative forward error is small.
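To see the bound in action, here is a small experiment (again in Python, continuing the hypothetical $f(x) = \log(x)$ example): corrupt the input by a single unit in the last place, which is the kind of backward error any backward stable algorithm commits, and compare the observed relative forward error with the prediction $\kappa_f(x)$ times the relative backward error.

```python
import numpy as np

# Perturb x by one ulp (the backward error a backward stable algorithm
# commits) and compare the observed relative forward error in log(x)
# with the prediction kappa_f(x) * |relative backward error|.
for x in [2.0, 1.0 + 1e-8, 1.0 + 1e-12]:
    xhat = np.nextafter(x, np.inf)         # x corrupted by one roundoff
    rel_backward = abs((x - xhat) / x)     # relative backward error, about u
    rel_forward = abs((np.log(x) - np.log(xhat)) / np.log(x))
    kappa = 1.0 / abs(np.log(x))           # condition number of log at x
    print(f"x = {x:<18} predicted = {kappa * rel_backward:.1e}"
          f"   observed = {rel_forward:.1e}")
```

For $x = 2$ both columns sit near $10^{-16}$, but at $x = 1 + 10^{-12}$ the same one-ulp backward error produces a forward error near $10^{-4}$: the algorithm did nothing wrong, yet twelve digits are gone, exactly as the bound predicts.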

  • Thank you so much for this detailed explanation. I had been struggling for quite some time to find an explanation to this question of mine. –  Oct 30 '20 at 06:28