Consider a singly recursive function $f$, which, say, has the following form:
    f(a):
        if a is some base case: return something
        b = pre-processing(a)
        c = f(b)
        d = post-processing(a, b, c)
        return d
I assume that pre-processing and post-processing do not raise exceptions or the like, make no function or recursive calls, etc. If there is no post-processing, the recursive call is a tail call. This means one can use tail-call optimization to avoid pushing a new frame onto the call stack.
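For concreteness, here is a minimal instance of the no-post-processing case; I use `gcd` purely as a hypothetical example, rendered in Python (which does not itself perform tail-call optimization, but the tail position of the call is what matters):

```python
def gcd(a, b):
    # base case
    if b == 0:
        return a
    # "pre-processing": compute the arguments of the recursive call
    return gcd(b, a % b)  # tail call: nothing remains to do after it returns
```

Because nothing happens after the recursive call returns, a compiler with tail-call optimization can reuse the current stack frame.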
My question is whether such an optimization can also always be performed even in the presence of post-processing.
One example of such an optimization is tail recursion modulo cons, where the post-processing consists of wrapping the result of the recursive call in a constructor. So, in a sense, my question is to understand how general this technique is, and which kinds of post-processing allow for such an optimization in general. As an example, consider the factorial function:
    factorial(n):
        if n < 2: return 1
        f = factorial(n-1)
        r = n*f
        return r
The call stack for this function takes $O(n)$ space, but it is very classical to add an accumulator parameter to make it tail recursive. The accumulator takes only $O(1)$ space (counted in number of integers stored). Can we do that more generally?
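The classical accumulator transformation looks like this (a Python sketch; the helper name `factorial_acc` is mine):

```python
def factorial_acc(n, acc=1):
    # acc holds the product of the factors processed so far,
    # i.e. the post-processing is folded into an extra argument
    if n < 2:
        return acc
    return factorial_acc(n - 1, acc * n)  # tail call

def factorial(n):
    return factorial_acc(n)
```

This works here because multiplication is associative, so the pending multiplications can be performed on the way down instead of on the way back up.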
As a final remark, I fully understand that one can always eliminate recursion and work with an iterative function, or transform any recursive call into a tail-recursive one by means of continuation-passing style. But this is in some sense cheating, since it merely simulates the call stack. Therefore I am asking whether a function of the above form can be transformed into an iterative or tail-recursive function without simulating the call stack.
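To illustrate the "cheating" I mean: the continuation-passing-style version of factorial below is formally tail recursive, but the chain of closures it builds is exactly a reified call stack, so no space is saved (Python sketch, names mine):

```python
def factorial_cps(n, k=lambda x: x):
    # k is the continuation: the accumulated post-processing
    if n < 2:
        return k(1)
    # each closure records one pending multiplication,
    # so the chain of closures plays the role of the call stack
    return factorial_cps(n - 1, lambda f: k(n * f))
```

The tail call here costs $O(1)$ stack per step under tail-call optimization, but the continuation itself still occupies $O(n)$ heap space.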