
What are the criteria for a function $f(t)$ to serve as the correction in an iteration function of the form $g(t) = t - \lambda f(t)$, where $\lambda$ is some relaxation factor? It is almost reminiscent of Newton's iteration, without the derivative.
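In code this is just a damped fixed-point iteration; a minimal Python sketch (the helper name and the defaults are mine, for illustration only):

```python
import math

def relaxed_iteration(f, t, lam=1.0, steps=50):
    """Iterate g(t) = t - lam * f(t) starting from t."""
    for _ in range(steps):
        t = t - lam * f(t)
    return t

# With f = sin and lam = 1, starting at 3.0 the orbit settles on the
# root t = 0 (an even multiple of pi).
root = relaxed_iteration(math.sin, 3.0)
```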

For instance, if $f(t)=\sin(t)$, then (for $0<\lambda<2$) $g(t)$ has attracting fixed points at $t=n\pi$ when $n$ is even and repelling fixed points when $n$ is odd.

If the iteration is instead $g(t) = t + \lambda f(t)$, then the attracting and repelling fixed points swap roles, so in the case of $f(t)=\sin(t)$ they are repelling when $n$ is even and attracting when $n$ is odd.
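The sign swap is easy to check numerically; a small sketch with $\lambda=1$ (nothing here beyond the standard library):

```python
import math

def iterate_map(g, t, steps=60):
    for _ in range(steps):
        t = g(t)
    return t

# t -> t - sin(t): from 3.0 (just below the repelling point pi) the
# orbit drains down to the attracting even-multiple root t = 0.
minus = iterate_map(lambda t: t - math.sin(t), 3.0)

# t -> t + sin(t): the roles swap, and the same start converges to pi.
plus = iterate_map(lambda t: t + math.sin(t), 3.0)
```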

Do functions which have this property have a special name?

Perhaps the iteration converges to a root of $f(t)=0$ exactly when the derivative there satisfies $0<f'(t)<2$.
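That range comes from the usual linearisation of the fixed-point map at a root $t^*$, taking $\lambda=1$:

$$g'(t^*) = 1 - f'(t^*), \qquad |g'(t^*)| < 1 \iff 0 < f'(t^*) < 2,$$

and for general $\lambda$ the local-convergence condition becomes $0 < \lambda f'(t^*) < 2$.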

And iterating the function $t-Z(t)$, where $Z$ is the Hardy Z function, doesn't converge to any zeros where $Z'(t)$ lies outside $(0,2)$, at least for the first 20 odd-indexed zeros I checked. The table below demonstrates this.

The left column is the difference between the result of 50 iterations (started at the $(2n-1)$th zero minus $0.1$) and the $(2n-1)$th zero itself; the right column is the derivative of $Z$ evaluated at the $(2n-1)$th zero:

$\left[\begin{array}{cc} 0.0 & 0.7931604332\\ 0.0 & 1.371721287\\ 0.0 & 1.382119539\\ 0.0 & 1.490610763\\ 0.0 & 1.568031477\\ -1.11005001 & 2.426579069\\ 0.0 & 1.391805619\\ -0.38497400 & 2.287779010\\ -0.59958219 & 2.186311017\\ -0.00000004 & 1.779555993\\ -0.98459094 & 2.637886209\\ -0.48377276 & 2.161778835\\ -0.32348014 & 2.176460788\\ 0.0 & 1.479402184\\ -1.37854250 & 3.515767073\\ -0.3298843 & 2.167414624\\ \infty & 2.982497202\\ 0.0 & 1.361150829\\ -14.4201635 & 3.119005954\\ -0.3041748 & 2.294939525 \end{array}\right]$

I'm sure I would find the same thing with the iteration function $t+Z(t)$.
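The first few rows of the table can be reproduced with a short script; a sketch, assuming the mpmath library (its `siegelz` is the Hardy Z function, `zetazero(n)` returns the $n$th nontrivial zeta zero $\tfrac12+i\gamma_n$, and `diff` takes a numerical derivative):

```python
from mpmath import mp, siegelz, zetazero, diff

mp.dps = 20

def iterate_Z(t, steps=50):
    # The iteration t -> t - Z(t), with Z the Hardy Z function.
    for _ in range(steps):
        t = t - siegelz(t)
    return t

rows = []
for n in range(1, 6):                    # first few odd-indexed zeros: 2n - 1
    gamma = zetazero(2 * n - 1).imag     # ordinate of the (2n-1)-th zero
    start = gamma - mp.mpf("0.1")
    rows.append((iterate_Z(start) - gamma, diff(siegelz, gamma)))
    print(*rows[-1])
```

For these five zeros the derivative lies in $(0,2)$, so the left entries come out as (numerically) zero, matching the table.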

To "fix" this, one can take $t-\tanh(f(t))$; then the derivative can be no more than $2$. See "Does this Newton-like iterative root finding method based on the hyperbolic tangent function have a name?"

It is the set of functions whose derivative at the roots is greater than $0$ and less than $2$. If anyone has a good idea of how to prove this: it is a conjecture, based on the empirical observation that iterating this method with the Hardy Z function converges precisely to the zeros where the derivative lies in that range.

crow
    So that would be, functions whose derivative at the roots is $1$? – leftaroundabout Mar 27 '17 at 19:46
  • @leftaroundabout not sure about that, take the Hardy Z function for instance. You can find roots this way but the derivative at its roots is not 1. It seems if the derivative is 1 then that corresponds to a superattractive fixed-point (multiplier 0), where the multiplier is the derivative of the iteration function evaluated at the root – crow Mar 27 '17 at 19:55
  • "It is basically Newton's iteration, without the derivative" is a little exaggerated, don't you think? – Jean Marie Mar 27 '17 at 20:27
  • @JeanMarie lol. Not really. $t-f(t)/f'(t)$ vs $t-f(t)$? Delete the derivative and replace it with 1, then you have crow's iteration. I should have said it's the relaxed Newton iteration without the derivative, perhaps – crow Mar 27 '17 at 20:33
  • Let us share some humor: A step more, I delete $f(t)$ in $t-f(t)$, it remains $t$ which, by transitivity is still close to Newton's iteration :) – Jean Marie Mar 27 '17 at 20:36
  • @JeanMarie Hmm, but I'm not sure the identity map can be used to find roots, and every point is a fixed-point of the identity :) Ok, updated: it only almost resembles Newton's iteration – crow Mar 27 '17 at 20:39
  • @leftaroundabout your comment made me think.. maybe it is , functions whose derivative at the roots is less than 2 and greater than 0 – crow Mar 27 '17 at 23:39
  • You didn't define what you are looking for. Is it roots of $f(t)$? How is $g$ derived from $f$? This is a nonsense question. You could certainly ask when an iteration $t_{i+1}=t_i-\lambda f(t_i)$ as a search for roots of $f(t)$ will succeed, which is my wild guess as to what you are asking. Then think about the requirement on the derivative of the iteration function to see where this works. -1 and I wish I could minus more. – Ross Millikan Sep 29 '17 at 05:23

1 Answer


The crux of your method is the $\lambda$ factor. It is a variation of trial and error, and a perfectly valid method when used with caution. I don't know of any particular name for it, but it is a very simple and natural method. Its effectiveness can be improved by adjusting the $\lambda$ factor depending on how close $f(t)$ is to zero, and perhaps on other data. One example of this is Newton's method, as mentioned in a comment.
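To sketch that last point: choosing $\lambda = 1/f'(t)$ at each step turns the update $t - \lambda f(t)$ into exactly Newton's method (the example function $f(t)=t^2-2$ and the fixed $\lambda=0.3$ are my own illustrative choices):

```python
def relaxed(f, t, lam, steps=40):
    """Fixed relaxation factor: t -> t - lam * f(t)."""
    for _ in range(steps):
        t = t - lam * f(t)
    return t

def newton(f, fprime, t, steps=40):
    """Adaptive lam = 1/f'(t): t -> t - f(t)/f'(t), i.e. Newton's method."""
    for _ in range(steps):
        t = t - f(t) / fprime(t)
    return t

# Both find the root sqrt(2) of f(t) = t^2 - 2 from t = 1.0; Newton's
# adaptive factor gets there in far fewer steps.
fixed_lam = relaxed(lambda t: t * t - 2, 1.0, 0.3)
newt = newton(lambda t: t * t - 2, lambda t: 2 * t, 1.0)
```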

Somos