My instructor asserts that the Regula Falsi method has a superlinear convergence order, specifically that the error shrinks by a factor related to the golden ratio, $\frac{\sqrt{5} - 1}{2} \approx 0.618$, per iteration. However, standard references such as Burden and Faires' *Numerical Analysis* only analyze convergence for the Secant and Newton–Raphson methods and omit Regula Falsi. Online resources are also inconsistent: some say it converges linearly, others superlinearly under certain conditions.
My confusion arises because:
- Regula Falsi always keeps the root bracketed, and in doing so one endpoint often stays fixed for many iterations, unlike the Secant method, where both points move. Intuitively, this stagnant endpoint should restrict the convergence speed.
- A golden-ratio convergence order is a well-known property of the Secant method (its order is $\frac{1+\sqrt{5}}{2} \approx 1.618$), but I'm unsure whether that carries over to Regula Falsi.
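To make the difference concrete, here is a minimal sketch of both iterations (my own illustration, not from my instructor; the test function $f(x) = x^3 - 2$ and the bracket $[1, 2]$ are arbitrary choices). Because $f$ is convex on this bracket, Regula Falsi keeps the endpoint $b = 2$ fixed forever, which is exactly the behavior that suggests linear rather than superlinear convergence:

```python
# Compare Regula Falsi (bracket maintained; one endpoint often stagnates)
# with the Secant method (no bracket; both points move every step).
# f(x) = x^3 - 2 and the bracket [1, 2] are illustrative choices only.

def regula_falsi(f, a, b, n):
    """Run n iterations of Regula Falsi on a sign-changing bracket [a, b]."""
    for _ in range(n):
        c = b - f(b) * (b - a) / (f(b) - f(a))  # secant-line root
        if f(a) * f(c) < 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]; b is reused (here: stays at 2)
    return c

def secant(f, x0, x1, n):
    """Run n iterations of the Secant method from guesses x0, x1."""
    for _ in range(n):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # guard: iterates (numerically) converged
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

if __name__ == "__main__":
    f = lambda x: x**3 - 2
    root = 2 ** (1 / 3)
    for k in (5, 10, 20):
        print(k,
              abs(regula_falsi(f, 1.0, 2.0, k) - root),
              abs(secant(f, 1.0, 2.0, k) - root))
```

Running this, the Regula Falsi error visibly shrinks by a roughly constant factor per step (linear-looking behavior with the stagnant endpoint), while the Secant error collapses much faster, which is what a superlinear order would predict.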