
I tried to use Simpson's rule to compute the following integral

$$ \pi = 4 \int_0^1 \frac{dx}{1+x^2} .$$

The MATLAB code is as follows. I expected the scaling law $error \propto h^4$:

Nlist = 2.^(2:6);
elist = 0 * Nlist;
for s1 = 1 : length(Nlist)
    N = Nlist(s1);
    h = 1/N;                 % width of each Simpson subinterval
    xlist = 0 : h/2 : 1;     % subinterval endpoints and midpoints (2N+1 nodes)
    ylist = 1 ./ (1 + xlist.^2);
    % Composite Simpson: (h/6)*(ends + 4*midpoints + 2*interior endpoints),
    % times 4 because pi = 4*integral
    sum1 = 4*h * ( (ylist(1) + ylist(end))*(1/6) ...
        + sum(ylist(2:2:end-1))*(4/6) ...
        + sum(ylist(3:2:end-2))*(2/6) );
    err = abs(sum1 - pi);    % "err", to avoid shadowing the built-in "error"
    elist(s1) = err;
end

plot(log2(Nlist), log2(elist), '*')
xlabel('log2(N)')
ylabel('log2(error)')
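As a cross-check, here is a minimal Python sketch of the same experiment (standard library only; the function and interval are those from the question). If the error really scales like $h^6$, halving $h$ should divide the error by about $2^6 = 64$ rather than $2^4 = 16$:

```python
import math

def composite_simpson(f, a, b, N):
    # N subintervals; each contributes (h/6) * (f(left) + 4*f(mid) + f(right))
    h = (b - a) / N
    terms = [f(a + j*h) + 4*f(a + j*h + h/2) + f(a + (j+1)*h) for j in range(N)]
    return math.fsum(terms) * h / 6

f = lambda t: 1.0 / (1.0 + t*t)
errs = {N: abs(4 * composite_simpson(f, 0.0, 1.0, N) - math.pi)
        for N in (4, 8, 16, 32)}
# Successive error ratios: about 64 for O(h^6), about 16 for O(h^4)
ratios = [errs[N] / errs[2 * N] for N in (4, 8, 16)]
print(ratios)
```

The ratios come out close to $64$, matching the slope of about $-6$ in the plot.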

[Figure: plot of log2(error) against log2(N)]

The slope is about $-6$. I expected it to be $-4$: Simpson's rule works more accurately than expected. I am totally baffled. Is it because of some hidden property of the integrand?

S. Kohn
  • Not that surprising: the integrand is close to $y = 1 - x^2$ on the interval $[0,1]$, knowing that Simpson's rule gives exact results for second-degree functions (even third-degree polynomials: https://math.stackexchange.com/q/698391). – Jean Marie Nov 22 '21 at 11:06
  • @JeanMarie But for the function $1/(1+x)$, one can find the expected $h^4$ law. – S. Kohn Nov 22 '21 at 11:11
  • Maybe my explanation is a little too simple. One can think there are "kind of" compensations... – Jean Marie Nov 22 '21 at 11:18
  • For such a smooth integrand there is an asymptotic error expansion. In the literature it is given explicitly for the trapezoidal rule, and the expansion coefficients can be computed in terms of the derivatives of the integrand and the Bernoulli numbers. Walter Gautschi's book "Numerical Analysis" lists the expansion. The expansion transfers to Simpson's rule through Richardson extrapolation. You will be able to deduce the order of the primary error term. – Carl Christian Nov 22 '21 at 12:17

2 Answers


Let us consider the problem of computing the integral $$T = \int_a^b f(x)\,dx$$ using both the composite trapezoidal rule $T_h$ and Simpson's rule $S_h$. If $f \in C^{(2k+1)}([a,b],\mathbb{R})$, then the error satisfies an asymptotic expansion of the form $$T - T_h = \sum_{j=1}^k \alpha_j h^{2j} + O(h^{2k+1}), \quad h \rightarrow 0, \quad h > 0.$$ The exact values of the expansion coefficients are given by $$\alpha_j = \frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(b) - f^{(2j-1)}(a)\right),$$ where $B_j$ is the $j$th Bernoulli number. We need only the first few, namely $$B_2 = \frac{1}{6}, \quad B_4 = -\frac{1}{30}, \quad B_6 = \frac{1}{42}.$$

In the present case we have $$f(x) = (1+x^2)^{-1}.$$ It is tedious but straightforward to verify that $$\alpha_1 \neq 0, \quad \alpha_2 = 0, \quad \alpha_3 \neq 0.$$ Hence the error for the trapezoidal rule is $O(h^2)$.

Simpson's rule can be obtained from the trapezoidal rule using Richardson extrapolation. Specifically, we have $$S_h = T_h + \frac{T_h - T_{2h}}{3}.$$ Richardson extrapolation eliminates the primary term of the error expansion and exposes the next one. In this case the next term is $O(h^6)$, because there never was a term that was $O(h^4)$.
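The "tedious but straightforward" verification can be delegated to a computer algebra system. A sketch using SymPy (assumed available): since $\alpha_j$ is $B_{2j}/(2j)!$ times $f^{(2j-1)}(1) - f^{(2j-1)}(0)$, we only need those odd-derivative differences.

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 + x**2)

# alpha_j vanishes exactly when f^(2j-1)(1) - f^(2j-1)(0) does,
# since the Bernoulli factor B_{2j}/(2j)! is nonzero
diffs = []
for j in (1, 2, 3):
    d = sp.diff(f, x, 2 * j - 1)
    diffs.append(sp.simplify(d.subs(x, 1) - d.subs(x, 0)))
print(diffs)  # [-1/2, 0, 15]
```

The output $[-1/2,\, 0,\, 15]$ confirms $\alpha_1 \neq 0$, $\alpha_2 = 0$, $\alpha_3 \neq 0$.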


The leading order error term for an integration quadrature $Q$ on an interval $[a, b]$ that integrates monomials up to degree $n$ exactly is of the form $C h^{n + 1}(f^{(n)}(b) - f^{(n)}(a))$, where $C$ is a constant independent of $f$ and $h$. This is proven easily as follows.

First consider applying the rule on a small interval $[-h, h]$. Define $E(f) = \int_{-h}^{h}f(x)\,dx - Q(f)$. Since integration and $Q$ are linear, $E$ is linear. Expand $f$ as a Taylor series around $0$: $$f(x) = \sum_{j = 0}^{n}\frac{f^{(j)}(0)}{j!}x^j + \frac{f^{(n + 1)}(0)}{(n + 1)!}x^{n + 1} + \frac{f^{(n + 2)}(\theta(x)x)}{(n + 2)!}x^{n + 2}.$$

Apply $E$ to both sides, noting that $E(x^j) = 0$ for $j \leq n$, to get $$E(f) = \frac{f^{(n + 1)}(0)}{(n + 1)!}E(x^{n + 1}) + E\left(\frac{f^{(n + 2)}(\theta(x)x)}{(n + 2)!}x^{n + 2}\right).$$ In general $E(x^j) = O(h^{j + 1})$. Similarly, as long as $f^{(n + 2)}$ is bounded (which holds if $f \in C^{n + 2}([a, b])$), we have $E\left(\frac{f^{(n + 2)}(\theta(x)x)}{(n + 2)!}x^{n + 2}\right) = O(h^{n + 3})$. So for the purpose of finding the leading error term, we may drop that remainder term. Thus $$E(f) \approx \frac{f^{(n + 1)}(0)}{(n + 1)!}E(x^{n + 1}) = Cf^{(n + 1)}(0)h^{n + 2}.$$

Now for the error $e(h)$ of the composite method on $[a, b]$ with subintervals of length $2h$, we add up the errors on the subintervals $[c_j - h, c_j + h]$ to get \begin{align} e(h) &\approx \sum_{j = 0}^{N - 1}Cf^{(n + 1)}(c_j)(2h)^{n + 2} \\ &= C(2h)^{n + 1}\sum_{j = 0}^{N - 1}f^{(n + 1)}(c_j)(2h) \\ &\approx C(2h)^{n + 1}\int_{a}^{b}f^{(n + 1)}(x)\,dx \\ &= C(2h)^{n + 1}(f^{(n)}(b) - f^{(n)}(a)). \end{align} In the third step we recognized a midpoint-rule approximation of $\int_{a}^{b}f^{(n + 1)}(x)\,dx$.
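For concreteness, the quantities $E(x^j)$ can be computed symbolically for Simpson's rule on $[-h, h]$. A sketch using SymPy (assumed available):

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)

def E(g):
    # Error functional for single-interval Simpson's rule on [-h, h]:
    # exact integral minus the quadrature (h/3)*(g(-h) + 4 g(0) + g(h))
    exact = sp.integrate(g, (x, -h, h))
    simpson = (h / 3) * (g.subs(x, -h) + 4 * g.subs(x, 0) + g.subs(x, h))
    return sp.expand(exact - simpson)

errors = [E(x**j) for j in range(6)]
print(errors)  # [0, 0, 0, 0, -4*h**5/15, 0]
```

So for Simpson's rule $n = 3$ and $E(x^4) = -\frac{4}{15}h^5$, which via $E(f) \approx \frac{f^{(4)}(0)}{4!}E(x^4)$ recovers the familiar single-interval error $-\frac{h^5}{90}f^{(4)}(0)$.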

In your case we have Simpson's rule, which has $n = 3$, and for your function $f(x) = \frac{1}{1 + x^2}$ we get $f^{(3)}(1) - f^{(3)}(0) = 0$. This explains why your observed order was $6$. Carrying out a proof similar to the one above for the midpoint rule (say, to the first two terms), you can see that the midpoint-rule error $e_m(h)$ has an asymptotic expansion $$e_m(h) = C_1h^2(f'(b) - f'(a)) + C_2h^4(f^{(3)}(b) - f^{(3)}(a)) + \dots$$ With this in hand, you can carry the proof for Simpson's rule to the first two terms as well (note that $E(x^j) = 0$ for odd $j$ due to symmetry), and you will see that the asymptotic error has the form $$e_s(h) = C_1h^4(f^{(3)}(b) - f^{(3)}(a)) + C_2h^6(f^{(5)}(b) - f^{(5)}(a)) + \dots$$ (the $C_1$ and $C_2$ here differ from those in the midpoint expansion). The exact coefficients $C_j$ in the midpoint expansion can be written in terms of Bernoulli numbers (the Euler–Maclaurin formula).
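This also explains the comment under the question that $1/(1+x)$ does show the expected $h^4$ law. A SymPy sketch (library assumed available) comparing the third-derivative difference for the two integrands:

```python
import sympy as sp

x = sp.symbols('x')

def third_deriv_diff(f):
    # f'''(1) - f'''(0): up to a constant, the coefficient of the h^4 term
    d3 = sp.diff(f, x, 3)
    return sp.simplify(d3.subs(x, 1) - d3.subs(x, 0))

vals = [third_deriv_diff(1 / (1 + x**2)), third_deriv_diff(1 / (1 + x))]
print(vals)  # [0, 45/8]
```

The $h^4$ coefficient vanishes for $1/(1+x^2)$ but not for $1/(1+x)$, so the latter converges at the generic fourth order.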

Mason