
I'm sure this is not a challenge for you but it remains an open question for me:

Is it wise to prefer a recursive algorithm over its for-loop counterpart?

E.g. take the evaluation of the natural logarithm of a number $N + 1$

$$ \ln (N+1) = \ln N + \Delta N, $$

where

$$ \Delta N = 2 \sum_{k=0}^{\infty} \frac{1}{(2k+1)(2N+1)^{2k+1}}. $$

While one could implement it as a recursive function with an appropriate termination condition (e.g. a relative error tolerance), one could just as well put it in a for-loop and break when desired.
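For concreteness, here is a minimal sketch of both variants (Python; the function names and the tolerance parameter `tol` are my own, not from any particular library):

```python
def delta_n_loop(N, tol=1e-15):
    """Compute 2 * sum_{k>=0} 1 / ((2k+1) (2N+1)^(2k+1)) with a loop."""
    x = 2 * N + 1
    total = 0.0
    term = 1.0 / x          # (2N+1)^-(2k+1) for k = 0
    k = 0
    while True:
        contrib = term / (2 * k + 1)
        total += contrib
        if abs(contrib) < tol * abs(total):
            break            # relative error tolerance reached
        term /= x * x        # advance to the next odd power of 2N+1
        k += 1
    return 2 * total

def delta_n_rec(N, tol=1e-15, k=0, term=None, total=0.0):
    """Same series, written as a recursive function."""
    x = 2 * N + 1
    if term is None:
        term = 1.0 / x
    contrib = term / (2 * k + 1)
    total += contrib
    if abs(contrib) < tol * abs(total):
        return 2 * total     # terminating condition
    return delta_n_rec(N, tol, k + 1, term / (x * x), total)
```

For $N = 1$ both should return $\ln 2 - \ln 1 = \ln 2 \approx 0.6931$.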

Max Herrmann

5 Answers


The answer will depend on the compiler. As @vonbrand wrote, "Given a good enough compiler, you might even get the very same object code." In particular, good compilers will do tail-call elimination. In some cases this can effectively transform the code into a for-loop. Your example looks like a good example of an instance where this could happen.

As @vonbrand says, "And unless circumstances are exceptional, your time is much more valuable than the computer's, so think first in terms of ease of writing/understanding the program". Or, to quote Tony Hoare and Donald Knuth: "Premature optimization is the root of all evil." To this I'd add: focus first on algorithm and data structure improvements (where one can often make the biggest gains) and on user needs; then, if the performance of this particular part of the code really is critical, try both a loop and recursion and measure. Don't rely on your intuition; measure.
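To illustrate why the series in the question is a good candidate: the recursion can be written so that the recursive call is the very last operation (tail position), which is exactly the form tail-call elimination turns into a loop. A sketch (in Python, which notably does not eliminate tail calls itself; the names are hypothetical):

```python
def series_tail(k, acc, x, tol):
    """Tail-recursive partial sums of sum_k 1 / ((2k+1) x^(2k+1))."""
    contrib = 1.0 / ((2 * k + 1) * x ** (2 * k + 1))
    if contrib < tol:
        return acc + contrib
    # The recursive call is the last thing evaluated: tail position.
    return series_tail(k + 1, acc + contrib, x, tol)

def series_loop(x, tol):
    """The loop a tail-call-eliminating compiler would effectively produce."""
    k, acc = 0, 0.0
    while True:
        contrib = 1.0 / ((2 * k + 1) * x ** (2 * k + 1))
        if contrib < tol:
            return acc + contrib
        k, acc = k + 1, acc + contrib
```

Because the loop performs the same floating-point operations in the same order, the two versions return identical results; with $x = 2N+1 = 3$, twice the sum approximates $\ln 2$.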

D.W.

I don't think recursion or for-loops are part of the abstract idea of an algorithm; rather, each is a specific strategy for implementing an algorithm on a computing system. So your question is really about which implementation strategy is better for an algorithm: recursion or a loop.

The answer (assuming you want to implement the algorithm on a general-purpose, off-the-shelf CPU) is that the for-loop will perform better, because each recursive call adds a frame to the call stack, an overhead that grows with the depth of the recursion.
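As a rough illustration of that stack growth (Python; the names are hypothetical): the recursive version consumes one stack frame per step and fails once the call-stack limit is exceeded, while the loop runs in constant stack space:

```python
import sys

def count_down_rec(n):
    """Consumes one stack frame per call."""
    if n == 0:
        return 0
    return 1 + count_down_rec(n - 1)

def count_down_loop(n):
    """Constant stack space, regardless of n."""
    total = 0
    while n > 0:
        total += 1
        n -= 1
    return total

depth = sys.getrecursionlimit()      # typically 1000 in CPython
try:
    count_down_rec(depth * 2)        # exceeds the call-stack limit
except RecursionError:
    print("recursion exhausted the call stack")
print(count_down_loop(depth * 2))    # the loop is unaffected
```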

Ankur

It is commonly agreed that loops (for or while) lead to code that is a bit faster than equivalent code based on recursive calls.

However, this speed improvement is small and should only be sought for crucial loops, that is, loops that will be run many millions of times. For the rest of the code (probably 99% of it) clarity, robustness, ease of debugging and modifying should be the priority. Choose recursion when it is clearer, loops when they are clearer. In the long run clearer code will run faster, sooner and for a longer time, than incomprehensible code.

phs

In general, truly recursive code will execute more slowly than its loop-based counterpart. That said, as others have mentioned, many compilers can transform recursive code into loop-based instructions.

Another point worth raising is that recursive code that fails to terminate will keep consuming stack space until it overflows the call stack, whereas a non-terminating loop, while still a bug, does not grow its memory use in this way.
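A small sketch of that failure mode (Python; the names and the deliberately broken tolerance are my own): a termination test that can never fire overflows the call stack in the recursive form, while the loop form can at least be guarded with an explicit iteration cap:

```python
def runaway_sum_rec(x, tol=0.0, k=0, term=None, acc=0.0):
    """Recursive partial sums; with tol = 0.0 the stop test never fires."""
    if term is None:
        term = 1.0 / x
    contrib = term / (2 * k + 1)
    if contrib < tol:                      # never True for tol = 0.0
        return acc
    return runaway_sum_rec(x, tol, k + 1, term / (x * x), acc + contrib)

def capped_sum_loop(x, tol=0.0, max_iter=10_000):
    """Loop version; an explicit iteration cap bounds the damage."""
    acc, term = 0.0, 1.0 / x
    for k in range(max_iter):
        contrib = term / (2 * k + 1)
        if contrib < tol:
            break
        acc += contrib
        term /= x * x
    return acc

try:
    runaway_sum_rec(3.0)                   # broken tolerance: blows the stack
except RecursionError:
    print("runaway recursion overflowed the call stack")
print(capped_sum_loop(3.0))                # the capped loop still finishes
```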


In support of the argument that clarity should be the priority, I only wanted to add that some functions are inherently recursive; indeed, some grow so quickly that they lie beyond even the class of primitive recursive functions, which forms the bottom of the hierarchy of recursive functions. Good examples are super-exponential functions (e.g., $n^{m^{u^{\cdots}}}$, where $n$, $m$ and $u$ are related to the problem size), for which one cannot easily imagine a simple implementation with a loop (or a fixed number of nested loops), in particular when they involve a recursive invocation in which at least one argument is itself computed recursively.

One such example is the Ackermann function, which is defined by three cases; the third computes Ackermann(m, n) as Ackermann(m - 1, Ackermann(m, n - 1)). This video (The most difficult program to compute?) provides an excellent introduction to these concepts and others. Note that the Ackermann function is computable, and one can easily prove that it halts whenever its arguments are non-negative; evaluating it simply takes an inconceivable amount of time for all but the smallest inputs.
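A minimal sketch of the three cases (Python; safe only for very small arguments, since both the values and the recursion depth explode):

```python
def ackermann(m, n):
    """The Ackermann function, defined by its three classic cases."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    # The third case: an argument that is itself computed recursively.
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Already ackermann(4, 2) has 19,729 decimal digits, which is why a loop-based rendering with a fixed nesting depth is out of reach.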

Cheers,

Carlos Linares López