5

My professor gave a lecture on orthogonal-polynomial-based approximation and its advantages over Taylor series expansion. His statement was: "in a weighted $L_2$ space, the Taylor series expansion is not optimal in the inner product sense, whereas in the orthogonal polynomial approximation the basis is optimal." I roughly know that Taylor series approximation has some limitations, such as requiring the function to be analytic or infinitely differentiable. But how is the Taylor series not optimal in the inner product sense?

Any suggestions towards finding the reason would greatly be appreciated.

  • 1
    Please add some background material regarding weighted $L_2$ spaces, and a reference to a textbook/paper for the statement that Taylor expansions are not optimal. (If your professor said it in class, it is ok, mention that). Your question has poor reception because you have neglected background. Once you provide details, your reception will go up. – Sarvesh Ravichandran Iyer Feb 15 '21 at 06:27
  • Thanks, @TeresaLisbon, for your suggestions. Now I have provided the specific background of the concerned problem. – Delayed signal Feb 15 '21 at 09:12
  • 1
    Appreciated. I understand your concern far better now. I think people will want to discuss it with you. – Sarvesh Ravichandran Iyer Feb 15 '21 at 09:22
  • 3
    The Taylor series is designed to be optimal near a single point, and this comes at the cost of possibly being less accurate further away from the point. If you use $L^2$, you try to make the approximation good over an entire region, but possibly not as good at any given point. – Nick Alger Feb 15 '21 at 09:48
  • Thanks, @NickAlger, Now I think I am getting something. Can you suggest some references, where I can find more about the above discussion? – Delayed signal Feb 15 '21 at 10:24
  • @KundanKumar I don't know any references. It pretty much follows directly from the definitions. – Nick Alger Feb 16 '21 at 05:36

2 Answers

3

Let us call the weighted $L^2$ space $W$, assume that all polynomials belong to $W$, and let $\Pi_n$ denote the subspace of polynomials of degree $n$ or less.

The essential point to note is that $p \in \Pi_n$ is the best approximation to $f \in W$ (in the sense of the $W$ norm) if and only if $f-p$ is orthogonal to $\Pi_n$. If $p_0, p_1, \cdots, p_n, \cdots$ are the $W$-orthogonal polynomials, each with unit norm, then the $n$th-order approximation \begin{align*} f_n = \sum_{k=0}^n \langle f, p_k \rangle p_k \end{align*} is optimal, because $p_0, p_1, \cdots, p_n$ span $\Pi_n$ and $f-f_n$ is easily verified to be in $\Pi_n^\perp$. Nor is it hard to see that such an expansion is unique. In particular, the $n$th-degree Taylor polynomial cannot do any better. Note that it might be no worse either, an obvious case being when $f$ is itself a polynomial of degree less than or equal to $n$.
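To see this optimality concretely, here is a small numerical sketch (my own illustration, not part of the answer): it assumes the plain $L^2$ inner product on $[-1,1]$, i.e. weight $w \equiv 1$, takes $f = \exp$ and $n = 2$, and compares the $L^2$ error of the Legendre projection $f_n$ against the degree-2 Taylor polynomial.

```python
import numpy as np
from numpy.polynomial import legendre

# Plain L^2 inner product on [-1, 1] (weight w = 1), f = exp, degree n = 2.
f, n = np.exp, 2

# Gauss-Legendre quadrature: accurate evaluation of the inner products
xq, wq = legendre.leggauss(50)

def p(k, x):
    """Orthonormal Legendre polynomial: ||P_k||^2 = 2/(2k+1) on [-1, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(x, c) * np.sqrt((2 * k + 1) / 2)

# Projection f_n = sum_k <f, p_k> p_k -- the optimal degree-n approximation
coeffs = [np.sum(wq * f(xq) * p(k, xq)) for k in range(n + 1)]
f_n = lambda x: sum(c * p(k, x) for k, c in enumerate(coeffs))

# Degree-2 Taylor polynomial of exp about 0
taylor = lambda x: 1 + x + x**2 / 2

def l2_err(g):
    """L^2([-1,1]) distance from f, computed by quadrature."""
    return np.sqrt(np.sum(wq * (f(xq) - g(xq)) ** 2))

print(l2_err(f_n), l2_err(taylor))  # the projection's error is smaller
```

The projection's error is strictly smaller, exactly as the orthogonality argument above predicts.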

The key result is: if $H$ is an inner product space, $x \in H$ and $V$ is a subspace of $H$, then $x \in V^\perp$ if and only if $$ \lVert v-x \rVert \geqslant \lVert x \rVert$$ for every $v \in V$. (Applied with $x = f - p$ and $V = \Pi_n$, and noting that $p + v$ ranges over all of $\Pi_n$ as $v$ does, this is precisely the characterisation of the best approximation used above.) To prove it, first assume $x \in V^\perp$. Then for any $v \in V$, \begin{align*} \lVert v-x\rVert^2 &= \langle v-x, v-x\rangle \\ &= \lVert v \rVert^2 + \lVert x \rVert^2 \\ &\geqslant \lVert x \rVert^2. \end{align*} Conversely, if $\lVert v -x \rVert \geqslant \lVert x \rVert$ for all $v \in V$, then for any $u \in V$ and $\alpha \in \mathbb C$ we also have $\alpha u \in V$, and \begin{align*} \lVert x \rVert ^2 &\leqslant \lVert x - \alpha u \rVert ^2 \\ &= \lVert x \rVert ^2 - \alpha \langle u, x \rangle - \overline{\alpha \langle u, x \rangle} +|\alpha|^2 \lVert u \rVert^2 \\ &= \lVert x \rVert^2 - 2 \Re \Big( \alpha \langle u, x\rangle \Big) + |\alpha|^2 \lVert u \rVert ^2 \tag{1}\label{BPA-1} \end{align*} Now choose $\theta$ so that $e^{i\theta}\langle u, x \rangle$ is real and non-negative, and for any $r > 0$ let $\alpha = re^{i\theta}$. Cancel $\lVert x \rVert^2$ on each side of inequality \eqref{BPA-1}, then divide by $r > 0$, to get \begin{align*} 2 \lvert \langle u, x \rangle \rvert \leqslant r \lVert u \rVert^2. \end{align*} Since $r > 0$ is arbitrary, letting $r \to 0$ forces $\langle u, x \rangle = 0$; as $u \in V$ was arbitrary, $x \in V^\perp$.
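The key result is easy to check numerically in a finite-dimensional inner product space. A minimal sketch (my own, illustrative only) in $H = \mathbb{R}^5$ with a two-dimensional subspace $V$, using an orthonormal basis from a QR factorisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner product space H = R^5; subspace V spanned by the orthonormal columns of Q
A = rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(A)          # orthonormal basis of V

f = rng.standard_normal(5)
proj = Q @ (Q.T @ f)            # best approximation of f in V
x = f - proj                    # residual

# x lies in V^perp ...
assert np.allclose(Q.T @ x, 0)

# ... and equivalently ||v - x|| >= ||x|| for every v in V
for _ in range(100):
    v = Q @ rng.standard_normal(2)   # random element of V
    assert np.linalg.norm(v - x) >= np.linalg.norm(x) - 1e-12

print("both characterisations hold")
```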

WA Don
  • 4,598
3

As a simple explanation why: The Taylor polynomial gives you the best possible approximation at one single point, and the further away you are from this point, the worse the approximation. The Taylor polynomial doesn’t even try to keep the error low when you are away from that point.

Polynomials that find an approximation minimising some norm tend to keep the error down over a whole range of values. They have to, since the errors over the whole range contribute to the error norm that is minimised.
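A tiny numerical sketch of this trade-off (my own illustration, assuming plain $L^2$ on $[-1,1]$ and $f = \exp$): the Taylor polynomial is exact at the expansion point, where the $L^2$ projection is not, but at the edge of the interval the projection wins.

```python
import numpy as np
from numpy.polynomial import legendre

f = np.exp

# Degree-2 Taylor polynomial of exp about 0
taylor = lambda x: 1 + x + x**2 / 2

# Degree-2 L^2([-1,1]) projection of exp via orthonormal Legendre polynomials
xq, wq = legendre.leggauss(50)          # Gauss-Legendre quadrature nodes/weights
def p(k, x):                            # orthonormal: ||P_k||^2 = 2/(2k+1)
    return legendre.legval(x, np.eye(3)[k]) * np.sqrt((2 * k + 1) / 2)
coeffs = [np.sum(wq * f(xq) * p(k, xq)) for k in range(3)]
proj = lambda x: sum(c * p(k, x) for k, c in enumerate(coeffs))

# Taylor is exact at the expansion point, the projection is not ...
print(abs(f(0) - taylor(0)), abs(f(0) - proj(0)))
# ... but at the edge of the interval the projection is more accurate
print(abs(f(1) - taylor(1)), abs(f(1) - proj(1)))
```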

Interpolating polynomials can be troublesome if you are not careful about which points you interpolate at. A mostly numerical method is minimising the maximum error over an interval; Chebyshev's equioscillation theorem characterises the solution. If you do numerical mathematics and want the highest possible numerical precision, you minimise the sum of the polynomial error and the rounding error, and you take into account that your polynomial coefficients should be floating point numbers.
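As a quick illustration of why the interpolation points matter (the Runge phenomenon), here is a sketch using NumPy (my own, not from the answer), with Runge's function $1/(1+25x^2)$ as the standard example: equispaced nodes give wild endpoint oscillations, while Chebyshev nodes behave well.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function
deg = 12
xs = np.linspace(-1, 1, 2001)             # fine grid for measuring max error

# Interpolation at equispaced nodes: prone to the Runge phenomenon
xe = np.linspace(-1, 1, deg + 1)
ce = P.polyfit(xe, f(xe), deg)
err_equi = np.max(np.abs(f(xs) - P.polyval(xs, ce)))

# Interpolation at Chebyshev points: near-minimax behaviour
cc = C.chebinterpolate(f, deg)
err_cheb = np.max(np.abs(f(xs) - C.chebval(xs, cc)))

print(err_equi, err_cheb)  # Chebyshev nodes give a far smaller maximum error
```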

gnasher729
  • 10,611
  • 20
  • 38