
I know only Maxima (and of course only to a little extent). Using whatever I knew, I tried to find approximate values of a known function, say $e^x$, using the Newton–Gregory forward and backward interpolation formulas. I am facing not one but two problems which I am not sure how to address.

I was taught, and many texts teach, without proper rationale, that for data near the beginning of the table we use the forward interpolation formula, and for data towards the end of the table we use the backward one. I chose $f(x) = e^x$ to study what would happen otherwise, and ran into a new problem while doing so. I chose the interval $[0,5]$; with $n$ denoting the number of equally spaced subintervals into which $[0,5]$ was divided, I applied the Newton forward interpolation formula for each value of $n$ from 1 to 100 and recorded the corresponding approximate value of $f$ at $x=0.1$. If my guess is right, as $n$ increases we should theoretically approach the true value of $f$. However, I see a huge variation in the computed values as $n$ increases. Of course, the data I have is not sufficient to conclude that the approximate value approaches the exact value as $n\to \infty$; still, I feel something is not right. The values behave perfectly well up to $n=54$, then something unusual happens from there until $n=99$, and finally for $n=100$ the curve fits the exact value. Is there anything wrong? The command I used is the following one:
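(The Maxima code did not survive the paste. As an illustration only, not my original command, the same experiment can be sketched in Python: build the table of leading forward differences of $e^x$ on $[0,5]$ and evaluate the Newton forward interpolant at $x=0.1$ for various $n$.)

```python
import math

def newton_forward(f, a, b, n, x):
    """Evaluate the Newton forward-difference interpolant of f on
    n+1 equally spaced nodes in [a, b] at the point x."""
    h = (b - a) / n
    # Build the leading forward differences Delta^k f_0 for k = 0..n.
    values = [f(a + i * h) for i in range(n + 1)]
    coeffs = [values[0]]
    for _ in range(n):
        values = [values[i + 1] - values[i] for i in range(len(values) - 1)]
        coeffs.append(values[0])
    # p(x) = sum_k C(s, k) * Delta^k f_0, where s = (x - x0) / h.
    s = (x - a) / h
    total, binom = coeffs[0], 1.0
    for k in range(1, n + 1):
        binom *= (s - (k - 1)) / k   # running product s(s-1)...(s-k+1)/k!
        total += binom * coeffs[k]
    return total

exact = math.exp(0.1)
for n in (10, 30, 54, 55, 80, 100):
    approx = newton_forward(math.exp, 0, 5, n, 0.1)
    print(f"n={n:3d}  p(0.1)={approx:.10f}  error={abs(approx - exact):.3e}")
```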

The output was as follows:

What are possible explanations for this?

Secondly, my main question was: why do we have two different formulas when the polynomials which they (the forward and backward interpolation formulas) represent are the same? I searched for an answer and found one explanation which ascribed the use of two different formulas to finite-precision arithmetic, but I could not grasp the argument properly. It would be helpful if someone sheds some light on this too. The site I visited was the following one:

Newton's Interpolation Formula: Difference between the forward and the backward formula
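For reference, the two formulas in question, with equally spaced nodes $x_i = x_0 + ih$ and $f_i = f(x_i)$:

$$p(x) = f_0 + s\,\Delta f_0 + \frac{s(s-1)}{2!}\Delta^2 f_0 + \cdots + \frac{s(s-1)\cdots(s-n+1)}{n!}\Delta^n f_0, \qquad s = \frac{x - x_0}{h},$$

$$p(x) = f_n + t\,\nabla f_n + \frac{t(t+1)}{2!}\nabla^2 f_n + \cdots + \frac{t(t+1)\cdots(t+n-1)}{n!}\nabla^n f_n, \qquad t = \frac{x - x_n}{h}.$$

Both are the unique polynomial of degree at most $n$ through the $n+1$ points, so in exact arithmetic they agree identically; they differ only in how that one polynomial is organized as a sum, which matters once the sum is truncated early or evaluated in rounded arithmetic.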

Yathi
  • This is likely about the limitations of the floating-point format and catastrophic cancellation. The Lagrange factors will grow rapidly with increasing density of sampling points. – Lutz Lehmann Jan 18 '23 at 20:28
  • Can you please elaborate? Is it the limitation of the computer or is there something happening behind? – Yathi Jan 19 '23 at 02:31
  • You are computing a sum of rather large numbers where the result is a relatively small number. This results in catastrophic cancellation. If you use a number type with higher precision, you get correct results for a higher sampling density. – Lutz Lehmann Jan 19 '23 at 06:50
  • Is there any reference or material to build more understanding of what you are describing? I am sorry, I am not getting the full meaning of what you are writing; the terminology you are using may be standard, with deeper meanings and theories behind it. – Yathi Jan 19 '23 at 10:23
  • Any book on scientific computing should handle these topics, how the result of summation depends on the ordering of the terms, what errors can and can not be avoided, the bad behavior of Lagrange interpolation for higher degrees. Computing divided differences and using the Newton interpolation formula with the increasing node sequence might also be useful. – Lutz Lehmann Jan 19 '23 at 12:45
  • Thank you for the inputs. Will go through the books. Will come back if any clarification is required. – Yathi Jan 19 '23 at 14:44
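The higher-precision fix suggested in the comments can be sketched (my illustration, not code from the thread) with Python's `decimal` module, which repeats the experiment at roughly 50 significant digits instead of double precision's ~16, so the rounding errors amplified by the high-order differences stay far below the answer:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with ~50 significant digits

def newton_forward_dec(a, b, n, x):
    """Newton forward-difference interpolation of exp on [a, b] in Decimal."""
    h = (Decimal(b) - Decimal(a)) / n
    # Leading forward differences of exp at the nodes, in high precision.
    values = [(Decimal(a) + i * h).exp() for i in range(n + 1)]
    coeffs = [values[0]]
    for _ in range(n):
        values = [values[i + 1] - values[i] for i in range(len(values) - 1)]
        coeffs.append(values[0])
    s = (Decimal(x) - Decimal(a)) / h
    total, binom = coeffs[0], Decimal(1)
    for k in range(1, n + 1):
        binom *= (s - (k - 1)) / k
        total += binom * coeffs[k]
    return total

# With the extra precision, sampling densities that misbehave in double
# precision come out close to the true value exp(0.1).
print(newton_forward_dec(0, 5, 80, Decimal("0.1")))
```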
