I'm an undergraduate just beginning to read about reversible computing. I know that, because of Landauer's principle, irreversible computations must dissipate heat (while reversible ones need not, at least in principle). I brought it up with my professor, who had never heard of reversible computing before, and he had difficulty understanding why the theory of reversible computing was not trivial.
His point was just that you can always save the input: for any function $f: \{ 0, 1 \}^n \rightarrow \{ 0, 1 \}^n$ that you wish to make reversible, define a new function $f_{reversible}: \{ 0, 1 \}^n \rightarrow \{0, 1 \}^{2n}$ by $f_{reversible}(x) = (f(x), x)$, which returns the output in the first $n$ bits and the saved input in the other $n$ bits. (To keep the domain and codomain the same size, you can instead take $f_{reversible}: \{ 0, 1 \}^{2n} \rightarrow \{0, 1 \}^{2n}$ and pad the last $n$ bits of the input with $0$s.) Then to invert $f_{reversible}$ you just discard the output and return the input that you saved.
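To make his construction concrete, here is a toy Python sketch of my own (the names and the particular choice of $f$ are just for illustration, with small integers standing in for bit-strings): even though $f$ itself is many-to-one and has no inverse, $f_{reversible}(x) = (f(x), x)$ is injective, and "inverting" it is just a projection.

```python
def f(x):
    """A deliberately non-injective function on 3-bit values:
    it collapses every input down to its lowest bit."""
    return x & 1

def f_reversible(x):
    """Return (f(x), x): the output, plus the saved input.
    Saving the input makes the combined map injective."""
    return (f(x), x)

def f_reversible_inverse(pair):
    """Invert by discarding the output and returning the saved input."""
    _output, saved_input = pair
    return saved_input

# Round trip: every input is recovered exactly, even though
# f itself cannot be inverted.
for x in range(8):  # all 3-bit inputs
    assert f_reversible_inverse(f_reversible(x)) == x
```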
My immediate objection was that this takes more memory than the original function did -- though only by a constant factor. Constraining the output to $n$ bits, so that the reversible function is a bijection on $\{ 0, 1 \}^n$, would seem to make the problem interesting again. Is this what is usually meant by reversible computing?
My other objection was that when we discard the output, we're doing something irreversible, which is going to dissipate heat. But we correctly recovered the initial state, so how could it be irreversible? I don't know enough physics to tell whether what matters for heat dissipation is only that the entire computation be reversible, or whether every intermediate step must be reversible as well, or whether this line of thought is barking up the wrong tree entirely.
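To put a number on that worry (and I may be misapplying the bound, especially since the discarded output bits are completely determined by the input bits we kept): if Landauer's principle applies to each discarded bit, then throwing away the $n$-bit output at temperature $T$ should dissipate at least

$$E \;\ge\; n \, k_B T \ln 2$$

of heat, which would mean the "just discard it" step is not free after all.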