
I have been reading about linearization, quadratic approximation, and approximation theory in general. From the examples discussed there, it seems the approximation works well only for points close to the known point $x = a$. For points farther from $x = a$, the error can grow, so the approximation is less useful there.

Question 1:

Is my understanding of the region of approximation correct?

Question 2:

If we extend the degree of the Taylor series to some reasonable $n$, what exactly is happening? How does the accuracy increase? What if there are many bumps in the curve near the point of approximation? Can someone give an example with a complicated function and show this?

Question 3:

If the above approximation method only approximates the region surrounding a point, I would like to know whether there is a way to approximate the entire function (not just the region close to the desired point) with decent error. If so, what is it?

Is this technique (the answer to question 3) the one used in regression in machine learning to predict new outputs when we already know a set of old inputs and outputs?

Harish Kayarohanam
  • Does https://math.stackexchange.com/questions/1308992/why-doesnt-a-taylor-series-converge-always/1309167 answer your first two questions? I can write up an answer to just #3 if so. – Ian Jun 08 '17 at 00:16
  • Can you give an explanation directed at this question? I read the link you sent, but it appears a bit tangential to what I am asking, or maybe I could not understand it. @Ian thanks for coming forward to clear my doubt. – Harish Kayarohanam Jun 08 '17 at 00:22
  • Do you have an animation or figure showing how it behaves as the degree grows? – Harish Kayarohanam Jun 08 '17 at 00:23
  • Basically, for an arbitrary smooth function, the Taylor approximation at $x=a$ of degree $n$ becomes more and more accurate on an interval $I_n$ centered at $a$, but $I_n$ may shrink as $n$ grows depending on the behavior of the derivatives. Non-Taylor methods, such as polynomial interpolation, can avoid this problem. – Ian Jun 08 '17 at 00:25
  • Understood, thanks. I also reread the link you passed, so please go ahead and answer the other question as you said. – Harish Kayarohanam Jun 08 '17 at 00:26

1 Answer


Taylor methods, except for analytic functions, don't give a good approximation on the entire domain. In fact, their accuracy may only improve on a shrinking sequence of intervals. This shrinking occurs for the classic example $f(x)=\begin{cases} 0 & x=0 \\ e^{-1/x^2} & \text{otherwise} \end{cases}$, whose Taylor approximants at $x=0$ are all identically zero.
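
To see this numerically, here is a minimal Python sketch (my own illustration, not part of the original answer) comparing the Taylor polynomials of $\sin x$ at $a=0$, whose error on a fixed interval shrinks as the degree grows, with the flat function above, whose Taylor polynomials at $0$ are all zero, so no degree helps:

```python
import numpy as np
from math import factorial

def taylor_sin(x, n):
    """Degree-n Taylor polynomial of sin about a = 0."""
    total = np.zeros_like(x)
    for k in range(n + 1):
        # Derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
        deriv = [0.0, 1.0, 0.0, -1.0][k % 4]
        total += deriv * x**k / factorial(k)
    return total

def flat(x):
    """The classic non-analytic function: 0 at x = 0, exp(-1/x^2) elsewhere."""
    safe = np.where(x == 0, 1.0, x)          # avoid division by zero at x = 0
    return np.where(x == 0, 0.0, np.exp(-1.0 / safe**2))

x = np.linspace(-3, 3, 1001)
for n in (1, 3, 5, 9):
    err_sin = np.max(np.abs(np.sin(x) - taylor_sin(x, n)))
    # Every Taylor polynomial of flat() at 0 is identically zero, so the error
    # is just max |flat(x)| on the interval, no matter how large n is.
    err_flat = np.max(np.abs(flat(x)))
    print(f"degree {n}: max |sin - T_n| = {err_sin:.3e},   max |flat - 0| = {err_flat:.3e}")
```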

There are non-Taylor methods for approximating functions. The "holy grail" is the minimax polynomial of a given degree for a function on an interval, which minimizes the maximum error on the interval. This usually cannot be computed exactly. A more feasible but of course less accurate alternative is to construct an interpolating polynomial at appropriate evaluation points. A good choice of such points is the Chebyshev nodes; a bad choice (for non-analytic functions) is evenly spaced nodes. (To see the reason, look up the Runge phenomenon.)
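
Here is a small numerical sketch of that comparison (my own illustration, not from the original answer; the function $1/(1+25x^2)$ is Runge's classic example, and the node counts are arbitrary choices):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def interp_max_error(nodes):
    """Interpolate runge() exactly at the given nodes and return the
    maximum error of that polynomial on a fine grid over [-1, 1]."""
    coeffs = np.polyfit(nodes, runge(nodes), deg=len(nodes) - 1)
    grid = np.linspace(-1, 1, 2001)
    return np.max(np.abs(runge(grid) - np.polyval(coeffs, grid)))

for n in (5, 10, 15):
    equi = np.linspace(-1, 1, n + 1)
    # Chebyshev nodes (roots of the Chebyshev polynomial) on [-1, 1]
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    print(f"degree {n:2d}: equispaced error = {interp_max_error(equi):.3e}, "
          f"Chebyshev error = {interp_max_error(cheb):.3e}")
```

As the degree grows, the equispaced error blows up near the endpoints (the Runge phenomenon), while the Chebyshev error keeps shrinking.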

One can also use non-interpolatory methods based on minimizing some error functional (usually a least-squares-type error). This is the most common choice in machine learning (where we usually have very high-dimensional data with relatively low-dimensional behavior). The difficulty is coming up with the structure of an appropriate model; merely finding the parameters of the model is straightforward, even though it is sometimes laborious.
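
As a minimal sketch of the least-squares idea (my own illustration; the cubic model, the target $\sin 3x$, and the noise level are all assumptions chosen for the example): pick a model structure first, then solve for its parameters by minimizing the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)   # noisy "training" data

# Design matrix for the chosen model y ~ c0 + c1*x + c2*x^2 + c3*x^3
A = np.vander(x, N=4, increasing=True)

# Solve min_c ||A c - y||^2 for the parameters
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

print("fitted coefficients:", np.round(coeffs, 3))
print("sum of squared residuals:", float(residuals[0]))
```

Here choosing the cubic structure is the modeling step; once that is fixed, finding the coefficients is just a linear algebra problem.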

Ian