
I have recently implemented a function for Euler's method and I am trying to find some information about the rate of convergence for it, though I am failing to understand it.

So far I have a set of errors, each computed as the absolute value of the difference between the exact value of the solution at a given point and its approximate value, over four different runs in which I changed the number of points (and hence the step size).

By getting the maximum value of all the errors of each iteration, I have:

Max. error (iteration 1: $h=\frac{1}{10}$): $2.793046047414873$

Max. error (iteration 2: $h=\frac{1}{20}$): $0.9388074952519254$

Max. error (iteration 3: $h=\frac{1}{40}$): $0.3136467973639547$

Max. error (iteration 4: $h=\frac{1}{80}$): $0.1367335060577979$

Now, I should be able to get the rate of convergence by calculating:

$|\text{max. error}|\leq Ch^2$, where $C$ is a constant independent of $h$

Which... I don't understand at all! Browsing Wikipedia articles and other sites gives me different formulae that are even more confusing, so I am not sure what to do now that I actually have all the data available.

Lightsong
  • Can you say more about the function that you're testing it on? – cmk May 25 '19 at 19:29
  • It's $y'=\frac{-16y}{t+1}-2y^2$, with an exact solution of $\frac{860934420}{13t^{16}+208t^{15}+1560t^{14}+7280t^{13}+23660t^{12}+56784t^{11}+104104t^{10}+148720t^9+167310t^8+148720t^7+104104t^6+56784t^5+23660t^4+7280t^3+1560t^2-114791048t-114791243}$ – Lightsong May 25 '19 at 19:31
  • @cmk : You surely mean that the other way around, the local truncation error order is 2, while the global error order is 1. – Lutz Lehmann May 26 '19 at 10:15
  • What is your progression of step sizes? I see a very nice reduction by a factor of $3$ from line to line in your data, which would be consistent with step sizes $h_k=3^{-k}h_0$. – Lutz Lehmann May 26 '19 at 10:25
  • How did you get your exact solution? With the initial value $y(0)=-\frac{15}2$ and the usual Bernoulli process I get $y(t)=-\frac{15}{2(t+1)}$ as solution, which looks a mite more compact. Or was the perturbation to $y(0)=-7.500000849367926$ intentional? – Lutz Lehmann May 26 '19 at 10:36
  • @LutzL Mind that I am completely new to all this :( We just started to "code" numerical methods, but our teacher has not explained a single bit about rates/orders of convergence and just gave us an assignment with these questions. I see that the errors decrease by a factor of 3, but I do not actually understand the concept of the order of convergence. Is it just 3? As for the exact solution, this is intentionally pulled from Maxima by using its internal ode2 function without any simplifications, which indeed seems to use Bernoulli (though I haven't touched that method, nor do we "need" to) – Lightsong May 26 '19 at 10:54
  • @Lightsong I mixed up the order of the terms by accident, thanks for the heads up. Here’s what I meant: Euler's method is a 1st order method globally. What you might’ve seen is the local truncation error. Some books define it in different ways, so depending on the definition, this is either 1st or 2nd order. – cmk May 26 '19 at 12:39
  • The point-wise errors can be approximated by polynomials in $h$, but the maximum over the interval does not need to be a polynomial. Nevertheless, if one assumes that the error is approximately polynomial, and that it has a strong quadratic term after the leading linear one, like in $h+100h^2$, then for largish $h$ one would see a quadratic behavior transitioning to linear. For $h_k=2^{-k}$ this gives the sequence $101.00000, 25.50000, 6.50000, 1.68750, 0.45312, 0.12891, 0.04004, 0.01391, 0.00543, 0.00233, 0.00107, 0.00051, 0.00025$, where the quotients change from 4 to about 3 to 2. – Lutz Lehmann May 27 '19 at 08:46
  • To conclude, what your progression of error values shows strongly depends on the progression of step sizes. If they were dyadic, $h=h_02^{-k}$, then an error $e_k=e_03^{-k}=e_0(2^{-k})^{\log_23}=e_0(h_k/h_0)^{\log_23}$ gives the (not existing for ODE solvers) error order $\log_23$. If the step sizes are $h_k=h_03^{-k}$, then the same error $e_k=e_03^{-k}=e_0(h_k/h_0)^1$ gives a clear first order behavior. For too large or too small step sizes you might get error progressions that do not reflect the correct error order. – Lutz Lehmann May 27 '19 at 08:54

1 Answer


The Euler method has the general error bound $$ e(t,h)\le \frac{M}{2L}(e^{Lt}-1)h $$ where $M$ is a bound on the second derivative of the solution or on the term $(f_t+f_xf)$ for the ODE function $f$ in the region that the solution curve lies in. $L$ is a Lipschitz constant of $f$ in the same region. For $t=h$ this reduces to the local quadratic truncation error $Mh^2$. This is a more specific statement of the general fact that the local truncation error order of the Euler method is 2, while the global error order is 1.
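As a quick check, assuming the step size is halved between two runs, the observed order can be estimated from two successive maximal errors, $$ p \approx \frac{\log\bigl(e_{\max}(h)/e_{\max}(h/2)\bigr)}{\log 2},\qquad\text{e.g.}\quad \frac{\log(2.7930/0.9388)}{\log 2}\approx 1.57. $$ Applied to the four values in the question this gives roughly $1.57$, $1.58$ and $1.20$, which is still pre-asymptotic behavior and not yet the global order $1$ discussed below.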


Your test problem $$ y'=-2y^2-\frac{16y}{t+1}, ~~ y(2)=4, ~~ t\in[2,3], ~~ h=0.1\cdot 2^{-k} $$ is a Bernoulli equation; this class has the general form $$y'(t)=P(t)y(t)+Q(t)y(t)^n.$$ The general solution method is to substitute $u=y^{1-n}$ to obtain a linear first-order ODE. Here set $u=y^{-1}$; then $$ u'=-y^{-2}y'=2 + \frac{16u}{t+1}\iff\left(\frac{u}{(t+1)^{16}}\right)'=\frac2{(t+1)^{16}} \\ \implies \frac{u(t)}{(t+1)^{16}}-\frac{u(2)}{3^{16}}=\frac2{15}\left(\frac1{3^{15}}-\frac{1}{(t+1)^{15}}\right). $$ Returning to the original variable gives $$ y(t)=-\frac{15}{2(t+1)(1-ϵ(t+1)^{15})} $$ with $ϵ=\frac1{3^{15}}\left(\frac{5}{2y(2)}+1\right)=\frac1{3^{15}}\cdot\frac{13}8$ for the initial condition $y(2)=4$.
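A minimal Python sketch of this exact solution (the names are illustrative, not taken from the question's Maxima output), convenient for checking the numerical errors below:

    # Exact solution of y' = -2*y^2 - 16*y/(t+1) with y(2) = 4,
    # in the form y(t) = -15 / (2*(t+1)*(1 - eps*(t+1)^15)) derived above.
    def y_exact(t):
        eps = (13 / 8) / 3**15   # eps = (5/(2*y(2)) + 1) / 3^15 with y(2) = 4
        return -15 / (2 * (t + 1) * (1 - eps * (t + 1)**15))

    print(y_exact(2.0))          # should recover the initial value 4 (up to rounding)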

The exact solution is convex and falls to zero, initially rapidly with $y'(2)=-\frac{160}3$. The tangents lie below the curve, so the Euler method deviates from the exact solution towards the $t$-axis, possibly overshooting the constant solution $y=0$. The high degree of the "perturbation" term $ϵ(t+1)^{15}$ dominates the long-term behavior of the solution and the error: while initially the error follows the exponential bound, for longer times it behaves like $t^{-16}$.

Performing the method with the given data results in the error table

h=1.000000e-01,  err=2.7930460474e+00,  err/h=27.930460
h=5.000000e-02,  err=9.3880749525e-01,  err/h=18.776150
h=2.500000e-02,  err=3.1364679736e-01,  err/h=12.545872
h=1.250000e-02,  err=1.3673350606e-01,  err/h=10.938680
h=6.250000e-03,  err=6.4772632372e-02,  err/h=10.363621
h=3.125000e-03,  err=3.1568508260e-02,  err/h=10.101923
h=1.562500e-03,  err=1.5594241728e-02,  err/h=9.980315
h=7.812500e-04,  err=7.7506759066e-03,  err/h=9.920865
h=3.906250e-04,  err=3.8638549926e-03,  err/h=9.891469
h=1.953125e-04,  err=1.9290725518e-03,  err/h=9.876851
h=9.765625e-05,  err=9.6382466361e-04,  err/h=9.869565

The first 5 lines, for the larger step sizes, indeed do not exhibit typical first-order behavior; the error quotients are close to $3$. The asymptotic behavior of a first-order method only manifests in the following lines, which do not occur in the experiments of the question. There the error quotient between consecutive lines comes close to $2$ (or $0.5$, depending on the reading direction); since the step sizes also halve from line to line, the dyadic power one observes in the error is exactly the one contained in the step size. That the quotient of error and step size becomes constant around $9.85$ likewise shows compatibility with the global error order $1$.
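For completeness, a minimal Python sketch (fixed step size on $[2,3]$, the exact solution derived above; function names are illustrative, this is not the code used for the table) that produces a table of this form:

    def f(t, y):
        """Right-hand side of y' = -2*y^2 - 16*y/(t+1)."""
        return -2 * y**2 - 16 * y / (t + 1)

    def y_exact(t):
        """Exact solution with y(2) = 4, derived via the Bernoulli substitution."""
        eps = (13 / 8) / 3**15
        return -15 / (2 * (t + 1) * (1 - eps * (t + 1)**15))

    def euler_max_error(h, t0=2.0, t1=3.0, y0=4.0):
        """Explicit Euler on [t0, t1]; return the maximal error over the grid points."""
        n = round((t1 - t0) / h)
        t, y, max_err = t0, y0, 0.0
        for _ in range(n):
            y += h * f(t, y)
            t += h
            max_err = max(max_err, abs(y - y_exact(t)))
        return max_err

    for k in range(11):
        h = 0.1 * 2**(-k)
        err = euler_max_error(h)
        print(f"h={h:.6e},  err={err:.10e},  err/h={err/h:.6f}")

The quotient err/h should again become nearly constant as $h$ decreases, in line with the global order $1$.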

Plotting the resulting solution curves on the left and the error divided by the step size $h$, that is, $e(t,h)/h$, on the right gives the following diagram.

[Figure: Euler solution curves for several step sizes (left) and the scaled errors $e(t,h)/h$ (right)]

That the graphs on the right converge visually to one curve $c_1(t)$ shows that the error order is indeed 1.
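A sketch of how such a figure could be produced with matplotlib (again with illustrative names; it repeats the right-hand side and exact solution so that it runs on its own):

    import matplotlib.pyplot as plt

    def f(t, y):
        return -2 * y**2 - 16 * y / (t + 1)

    def y_exact(t):
        eps = (13 / 8) / 3**15
        return -15 / (2 * (t + 1) * (1 - eps * (t + 1)**15))

    def euler_path(h, t0=2.0, t1=3.0, y0=4.0):
        """Return the grid points and the Euler approximations on [t0, t1]."""
        n = round((t1 - t0) / h)
        ts, ys = [t0], [y0]
        for _ in range(n):
            ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
            ts.append(ts[-1] + h)
        return ts, ys

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    for k in range(6):
        h = 0.1 * 2**(-k)
        ts, ys = euler_path(h)
        ax1.plot(ts, ys, label=f"h={h:g}")
        ax2.plot(ts, [abs(y - y_exact(t)) / h for t, y in zip(ts, ys)])
    ax1.plot(ts, [y_exact(t) for t in ts], "k--", label="exact")
    ax1.set_title("Euler solutions")
    ax2.set_title("e(t,h)/h")
    ax1.legend(fontsize=8)
    plt.show()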

Lutz Lehmann
  • I'm sorry, but I fail to completely understand it. In my case, I should "guess" the rate of convergence from the set of maximum local errors (the one in the original question). For example, in another numerical method (central differences for a linear second-order ODE), he takes the quotient between the first and last maximum error, which gives approx. $4$, and then says that it's expected since the method in question is of order $2$ – Lightsong May 26 '19 at 19:35
  • You are still missing the second data column. Without the step size or number of steps in the fixed interval to relate to, your values could mean anything. Please also add the integration interval. // Also, your problem is non-linear and stiff, so that the error at larger step sizes is not very indicative for the error order. As is often the case for stiff DE, there might be no "sweet spot" where the observed error behaves like the leading term, as for small step sizes the accumulated floating point errors start to dominate. – Lutz Lehmann May 26 '19 at 20:03
  • I added the values for $h$ now, the interval is $[2,3]$ – Lightsong May 27 '19 at 10:14
  • @Lightsong : As you can see, using only the first 4 lines completely distorts the picture, as the typical asymptotic behavior only starts in line 6. – Lutz Lehmann May 27 '19 at 11:58