1

My textbook claims that, for small step size $h$, Euler's method has a global error that is at most proportional to $h$, so that error $= C_1h$. It is then claimed that $C_1$ depends on the initial value problem, but no explanation is given of how one finds $C_1$.

So if I know $h$, then how can I deduce $C_1$ from the IVP?

The Pointer
  • 4,686

2 Answers

4

Given an IVP: $$\frac{dy}{dt}=f(t,y), y(a)=y_0, t\in [a,b].$$ Here is a Theorem from Numerical Analysis by Sauer:

Assume that $f(t,y)$ has a Lipschitz constant $L$ for the variable $y$ and that the solution $y_i$ of the initial value problem at $t_i$ is approximated by $w_i$, using Euler's method. Let $M$ be an upper bound for $|y''(t)|$ on $[a,b]$. Then $$|w_i-y_i|\le \frac{Mh}{2L}(e^{L(t_i-a)}-1).$$
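To get a concrete feel for this bound, here is a short numerical check on the test problem $y'=y$, $y(0)=1$ on $[0,1]$ (the test problem is my own choice, not from Sauer): there $f(t,y)=y$, so $L=1$, the exact solution is $y(t)=e^t$, and $|y''(t)|=e^t\le M=e$ on the interval.

```python
import math

# Check the error bound from Sauer's theorem on the test IVP
# y' = y, y(0) = 1 on [0, 1]  (test problem is my own choice).
# Here f(t, y) = y, so the Lipschitz constant is L = 1, the exact
# solution is y(t) = e^t, and |y''(t)| = e^t <= M = e on [0, 1].
a, b, y0 = 0.0, 1.0, 1.0
L, M = 1.0, math.e
h = 0.01
n = round((b - a) / h)

w, t = y0, a
for i in range(1, n + 1):
    w = w + h * w          # Euler step: w_{i+1} = w_i + h f(t_i, w_i)
    t = a + i * h
    err = abs(w - math.exp(t))
    bound = M * h / (2 * L) * (math.exp(L * (t - a)) - 1)
    assert err <= bound, (t, err, bound)

print(f"final error {err:.6f} <= bound {bound:.6f}")
```

At $t=1$ the actual error is about $0.0135$ while the bound is about $0.0234$; the theorem's bound holds at every step, though it is conservative.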

The proof is based on the following lemma:

Assume that $f(t,y)$ is Lipschitz in the variable $y$ on the set $S=[a,b]\times [\alpha,\beta]$. If $Y(t)$ and $Z(t)$ are solutions in $S$ of the differential equation $y'=f(t,y)$ with initial conditions $Y(a)$ and $Z(a)$ respectively, then $$|Y(t)-Z(t)|\le e^{L(t-a)}|Y(a)-Z(a)|.$$

Sketch of proof of the first theorem:

Let $g_i=|w_i-y_i|$ be the global error at step $i$, let $e_i$ be the local truncation error of the $i$-th step, and let $z_{i-1}$ satisfy the local IVP: $$z_{i-1}'=f(t,z_{i-1}),\quad z_{i-1}(t_{i-1})=w_{i-1},\quad t\in [t_{i-1},t_{i}].$$

Then $$g_i=|w_i-y_i|\le |w_i-z_{i-1}(t_i)|+|z_{i-1}(t_i)-y_i|\\ \le e_i+e^{Lh}g_{i-1}\\ \le e_i+e^{Lh}(e_{i-1}+e^{Lh}g_{i-2})\le \cdots\\ \le e_i+e^{Lh}e_{i-1}+e^{2Lh}e_{i-2}+\cdots +e^{(i-1)Lh}e_1.$$ Here $|w_i-z_{i-1}(t_i)|\le e_i$ is exactly the local truncation error of the step from $t_{i-1}$ to $t_i$, and $|z_{i-1}(t_i)-y_i|\le e^{Lh}|w_{i-1}-y_{i-1}|=e^{Lh}g_{i-1}$ follows from the lemma applied on $[t_{i-1},t_i]$. Since each $e_i\le \frac{h^2M}{2}$ (by Taylor's theorem with remainder), we have $$g_i\le \frac{h^2M}{2}(1+e^{Lh}+\cdots+e^{(i-1)Lh})=\frac{h^2M(e^{iLh}-1)}{2(e^{Lh}-1)}\le \frac{Mh}{2L}(e^{L(t_i-a)}-1),$$ using $e^{Lh}-1\ge Lh$ and $ih=t_i-a$ in the last step.
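To connect this back to the question: in practice one often estimates $C_1$ empirically rather than from $L$ and $M$, by computing the global error for several step sizes and watching the ratio error$/h$ settle down to a constant. A minimal sketch on the test problem $y'=y$, $y(0)=1$ on $[0,1]$ (my own choice of example), where the ratio tends to $e/2\approx 1.359$:

```python
import math

# Empirically estimate C_1 in  error ≈ C_1 h  for y' = y, y(0) = 1
# on [0, 1]  (example mine), by halving the step size repeatedly.
def euler_error(h):
    n = round(1.0 / h)
    w = 1.0
    for _ in range(n):
        w += h * w         # Euler step for y' = y
    return abs(w - math.e) # global error at t = 1

for h in [0.1, 0.05, 0.025, 0.0125]:
    print(f"h = {h:<7} error/h = {euler_error(h)/h:.4f}")
```

The printed ratios creep up toward $e/2\approx 1.359$. Note this empirical $C_1$ is well below the theorem's worst-case value $\frac{M}{2L}(e^{L(b-a)}-1)=\frac{e(e-1)}{2}\approx 2.335$; the theorem gives a guaranteed bound, not the sharp constant.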

Hope this helps.

KittyL
  • 17,275
0

Consider the IVP $y'=f(t,y)$, $y(t_0)=y_0$. Let $t_k=t_0+kh$ define the time grid and $y_k$ the numerical solution computed by the Euler method $$ y_{k+1}=y_k+hf(t_k,y_k). $$ We know that the error order of the Euler method is one. Thus the iterates $y_k$ have an error relative to the exact solution $y(t_k)$ of the form $$y_k=y(t_k)+c_kh$$ with some coefficients $c_k$ that will be determined more precisely in the course of this answer.


Now insert this representation of $y_k$ into the Euler step and apply Taylor expansion where appropriate \begin{align} [y(t_{k+1})+c_{k+1}h]&=[y(t_k)+c_kh]+hf(t_k,[y(t_k)+c_kh])\\ &=y(t_k)+c_kh+h\Bigl(f(t_k,y(t_k))+h\,∂_yf(t_k,y(t_k))\,c_k+O(h^2)\Bigr)\\ &=y(t_k)+hy'(t_k)+h\Bigl[c_k+h\,∂_yf(t_k,y(t_k))\,c_k\Bigr]+O(h^3)\\ y(t_k+h)&=y(t_k)+hy'(t_k)+\tfrac12h^2y''(t_k)+O(h^3) \end{align} where $∂_y=\frac{\partial}{\partial y}$ and later $∂_t=\frac{\partial}{\partial t}$.


Replacing $y(t_{k+1})$ in the first equation with the Taylor series for $y(t_k+h)$ as in the last line, the first two terms on the left side cancel against the same terms on the right side. The remaining second-order derivative can be further written as $$ y''(t)=\frac{d}{dt}f(t,y(t)) =∂_tf(t,y(t))+∂_yf(t,y(t))\,f(t,y(t)) \overset{\rm Def}=Df(t,y(t)). $$ Now cancel the common factor $h$ and re-arrange to get a difference equation for $c_k$ $$ c_{k+1}=c_k+h\Bigl[∂_yf(t_k,y(t_k))c_k-\tfrac12Df(t_k,y(t_k))\Bigr]+O(h^2). $$


This looks like the Euler method applied to a linear ODE for a continuously differentiable function $c$, $$ c'(t)=∂_yf(t,y(t))c(t)-\tfrac12Df(t,y(t)),~~\text{ with }~~c(t_0)=0. $$ Again by the first-order accuracy of the Euler method, $c_k$ and $c(t_k)$ differ only by $O(h)$, so that the error we aim to estimate is $$y_k-y(t_k)=c(t_k)h+O(h^2).$$
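As a check on this formula, take the test problem $y'=y$, $y(0)=1$ (my own example): there $∂_yf=1$ and $Df=y=e^t$, so the ODE for $c$ becomes $c'=c-\tfrac12e^t$, $c(0)=0$, with exact solution $c(t)=-\tfrac t2e^t$. The predicted leading error at $t=1$ is therefore $c(1)h=-\tfrac e2h$:

```python
import math

# Check  y_k - y(t_k) ≈ c(t_k) h  on the test IVP y' = y, y(0) = 1
# (example mine).  There ∂_y f = 1 and Df = y = e^t, so the variational
# ODE is  c' = c - e^t/2, c(0) = 0,  solved exactly by c(t) = -(t/2) e^t.
h = 0.001
n = round(1.0 / h)
w = 1.0
for _ in range(n):
    w += h * w                       # Euler step for y' = y

actual = w - math.e                  # y_N - y(1)
predicted = -(math.e / 2) * h        # c(1) h = -(e/2) h
print(actual, predicted)             # the two agree up to O(h^2)
```

Note the sign: Euler undershoots $e^t$, exactly as the negative coefficient $c(t)$ predicts.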


Now if $L$ is a bound for $∂_yf$, the $y$-Lipschitz constant, and $M$ is a bound for $Df=∂_tf+∂_yf\,f$, i.e. for the second derivative $y''$, then Grönwall's lemma gives $$ \|c'\|\le L\|c\|+\frac12M\implies \|c(t)\|\le \frac{M(e^{L|t-t_0|}-1)}{2L}, $$ which reproduces the usual estimate for the coefficient of the error term.
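For the test problem $y'=y$ on $[0,1]$ (my own example, with $L=1$, $M=e$), the variational ODE $c'=c-\tfrac12e^t$, $c(0)=0$ has exact solution $c(t)=-\tfrac t2e^t$, so the true coefficient is $|c(1)|=e/2\approx 1.359$, while the Grönwall estimate gives $e(e-1)/2\approx 2.335$; the bound is safe but not tight:

```python
import math

# Compare the exact error coefficient with the Grönwall bound for the
# test IVP y' = y, y(0) = 1 on [0, 1]  (example mine: L = 1, M = e).
L, M = 1.0, math.e
actual_coeff = math.e / 2                        # |c(1)|, c(t) = -(t/2) e^t
gronwall = M * (math.exp(L * 1.0) - 1) / (2 * L) # M (e^L - 1) / (2L)
print(f"|c(1)| = {actual_coeff:.4f}, Gronwall bound = {gronwall:.4f}")
```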

Lutz Lehmann
  • 131,652