
How do we use the false position method in such cases?

For example, how can the root $x=3$ be found with the method?


$f(x)=(x-3)^2(x-1)$

Mathrix
  • For $f(x)= 0$ , either $(x-3)^2 = 0$ or $(x-1) = 0$. Solving the first gives two equal values $x=3$ and solving the second gives $x=1.$ – The Demonix _ Hermit Dec 03 '19 at 13:43
  • @TheDemonix_Hermit what is the relation of this with the false position method? – Mathrix Dec 03 '19 at 13:47
  • https://en.wikibooks.org/wiki/Numerical_Methods/Equation_Solving#False_Position_Method – Moo Dec 03 '19 at 13:59
  • @Moo "The rate of convergence is still linear but faster than that of the bisection method. Both these methods will fail if f has a double root." Thank you so much. Wikipedia is meaninglessly censored here, so it is hard to search for it. – Mathrix Dec 03 '19 at 14:02
  • This statement about the convergence rate is wrong in this generality. The rate of convergence depends on how close to the root the stalling occurs. "Closeness" depends on the slope and curvature of the function. Variations of the regula falsi restore super-linear convergence (that is, "much better than bisection") while also shrinking the interval length to zero. – Lutz Lehmann Dec 03 '19 at 14:44

1 Answer


The root $x=3$ cannot be found with the false position method. Any root found by a bracketing method has to be a zero crossing, that is, the function must take values of both signs in every neighborhood of the root. You also need to start the method with a bracketing interval, which here must contain the other, simple root $x=1$. The method will converge to that root, possibly after a long initial phase in which one end point of the interval moves towards $x=3$.

For a polynomial function, you can eliminate the multiple roots by computing the GCD of $f$ and its derivative $f'$ and dividing it out of $f$. This requires the coefficients to be exact, that is, integers or given as rationals.
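For the cubic in this question, the reduction looks like the following sketch. The helper names `poly_div` and `poly_gcd` are illustrative, not from any library; exact `Fraction` arithmetic stands in for the integer/rational coefficients the argument requires.

```python
from fractions import Fraction

def poly_div(a, b):
    """Divide polynomial a by b (coefficient lists, highest degree first);
    returns (quotient, remainder)."""
    a = list(a)
    q = []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)  # the leading coefficient is now exactly zero
    return q, a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials; the result is made monic."""
    while any(c != 0 for c in b):
        _, r = poly_div(a, b)
        while r and r[0] == 0:  # strip leading zeros of the remainder
            r.pop(0)
        a, b = b, (r if r else [Fraction(0)])
    return [c / a[0] for c in a]

# f(x) = (x-3)^2 (x-1) = x^3 - 7x^2 + 15x - 9 and its derivative
f  = [Fraction(c) for c in (1, -7, 15, -9)]
df = [Fraction(c) for c in (3, -14, 15)]

g = poly_gcd(f, df)            # gcd(f, f') = x - 3, the repeated factor
reduced, rem = poly_div(f, g)  # f / gcd = x^2 - 4x + 3: same roots, all simple
print(g, reduced, rem)
```

The reduced polynomial $x^2-4x+3$ has the same roots $1$ and $3$, but both are now simple zero crossings, so a bracketing method can find either one.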


Starting the regula falsi method for the given polynomial on the interval $[-2,5]$, the method in some sense "finds" the root at $x=3$: the active interval end moves toward that point, while the lower end $-2$ never changes. However, the bracketing interval does not shrink toward zero length and the convergence is very slow, for instance $x[5]=3.50$, $x[8]=3.38$, $x[12]=3.30$, $x[22]=3.20$, $x[55]=3.10$.
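This stalling can be reproduced with a direct implementation (a sketch; with the indexing convention $x[1]=5$ for the initial upper endpoint, the entries of `xs` approximately match the iterates quoted above):

```python
def regula_falsi(f, a, b, steps):
    """Plain false position: keep the bracket [a, b] with f(a)*f(b) < 0 and
    replace the endpoint whose value has the same sign as the new f(c)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change to start"
    xs = []
    for _ in range(steps):
        c = b - fb * (b - a) / (fb - fa)  # secant crossing of the x-axis
        fc = f(c)
        if fa * fc < 0:
            b, fb = c, fc   # root bracketed in [a, c]
        else:
            a, fa = c, fc   # root bracketed in [c, b]
        xs.append(c)
    return xs

f = lambda x: (x - 3)**2 * (x - 1)
xs = regula_falsi(f, -2.0, 5.0, 60)
# with x[1] = 5, the quoted x[n] corresponds to xs[n-2]
print([round(xs[n - 2], 2) for n in (5, 8, 12, 22, 55)])
```

Since $f(c)>0$ for every secant point $c\in(3,5)$, only the upper endpoint ever moves; the lower endpoint $-2$ is retained forever.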

Changing to the Illinois modification does not help much, as this anti-stalling measure is geared towards simple roots in the standard situation of a convex increasing function (or any of its flipped variants) over the interval. The double root still leads to long stalling segments. Enhancing this variant with a stalling counter and an over-relaxed Aitken delta-squared formula restores fast convergence to the root $x=1$.
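A minimal Illinois sketch (the halving rule is the standard one: halve the stored function value at an endpoint that is retained in two consecutive steps; the tolerance and iteration cap are arbitrary choices) illustrates this on the example. In this run the iterates still creep into the double root $x=3$ at a merely linear rate, roughly halving the distance per step, rather than escaping to $x=1$:

```python
def illinois(f, a, b, tol=1e-12, max_iter=1000):
    """False position with the Illinois rule: when the same endpoint is
    retained in two consecutive steps, halve its stored function value."""
    fa, fb = f(a), f(b)
    side = 0
    c = a
    for _ in range(max_iter):
        c = (fa * b - fb * a) / (fa - fb)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fc * fb > 0:       # c replaces b, a is retained
            b, fb = c, fc
            if side == -1:    # a retained twice in a row: shrink its weight
                fa *= 0.5
            side = -1
        else:                 # c replaces a, b is retained
            a, fa = c, fc
            if side == 1:
                fb *= 0.5
            side = 1
    return c

f = lambda x: (x - 3)**2 * (x - 1)
root = illinois(f, -2.0, 5.0)
print(root)
```

Here the halving of $f(-2)=-75$ never catches up with $f(b)\approx 2(b-3)^2$, which shrinks quadratically in the distance to $3$, so the bracket $[-2,b]$ never collapses and the secant points keep stalling at the double root.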

Lutz Lehmann
  • I don't think you need integer or rational coefficients to divide out the repeated roots, depending on what you mean; the division algorithm itself is valid over any field and even if you want it to be done algorithmically, everything's okay as long as you're in a field where you can tell if an expression is zero or not (notably, for instance, having algebraic numbers as coefficients is okay). – Milo Brandt Dec 03 '19 at 14:51
  • @Milo Brandt: Yes, as long as you have exact algebraic coefficients. However, we are discussing numerical algorithms where coefficients and roots are floating point numbers, in most cases approximations. You can still do that if you devise a method to decide when an approximate remainder is close enough to zero; this will then (hopefully) reduce the density of root clusters. – Lutz Lehmann Dec 03 '19 at 15:01
  • To your point that the regula falsi method converges in "some sense", this is the same sense in how regula falsi usually converges: only one point approaches the root while the other does not. – Simply Beautiful Art Aug 28 '20 at 03:23