
Floating-point precision errors are bound to appear when one forces a computer to deal with very large or very small numbers, especially when both happen at the same time. I was playing around with functions similar to the one described in this question, that is, functions of the type $$ g(f(x))=\left(1+\frac{1}{f(x)}\right)^{f(x)}, $$ with $\lim_{x\to +\infty}{f(x)}=+\infty$ and $f(x)$ a "rapidly" growing function of $x$, such as $e^{x}$, $\Gamma(x+1)$, $x^{9}$, and so on. For every one of these functions the plots I get behave similarly, as the first figure shows. I don't understand the zig-zag behavior very well, but the jump to $1$ is easy to understand: $\frac{1}{f(x)}$ becomes so small that the computer effectively treats it as $0$.

[Figure 1: plots of $g(f(x))$ for the rapidly growing choices of $f$]
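The jump to $1$ can be reproduced in a few lines. The sketch below (plain Python, assuming IEEE 754 double precision) shows that once $1/f(x)$ drops below half the machine epsilon ($\varepsilon \approx 2.22\cdot 10^{-16}$), the sum $1 + 1/f(x)$ rounds to exactly $1.0$, and the whole expression collapses:

```python
import math

def g(f):
    """Compute (1 + 1/f)**f in ordinary double precision."""
    return (1.0 + 1.0 / f) ** f

# For moderate f the result is close to e, as expected.
print(g(1e8))  # roughly e = 2.71828...

# Machine epsilon for binary64 is about 2.22e-16; once 1/f is
# below eps/2, the addition 1 + 1/f rounds to exactly 1.0.
print(1.0 + 1e-17 == 1.0)  # True
print(g(1e17))             # exactly 1.0
```

The zig-zag region sits between these two regimes, where $1 + 1/f$ is still distinguishable from $1$ but carries a large relative rounding error that the huge exponent amplifies.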

That all said, here comes the surprising part. It is known that $\lim_{x\to\infty}x^{1/x}=1$, and so $\lim_{x\to\infty}{\Gamma(x+1)^{\frac{1}{\Gamma(x+1)}}}=1$. Hence $\tan\left(\pm\frac{\pi}{2}\Gamma(x+1)^{\frac{1}{\Gamma(x+1)}}\right)$ should also go to infinity (in modulus, at least) with $x$. When I plot $g(f(x))$ with this tangent expression as $f$, numerical precision problems also disrupt the function, but they do so in a rather strange way, as the second figure shows (I added a line at $1$ for comparison). There is no zig-zag (which doesn't trouble me that much, actually), but while the function with the positive sign collapses to $1$, the one with the negative sign collapses to $\approx 6.12961$.

[Figure 2: the case of the strange tangent functions]

Does anyone know why this particular function behaves so strangely? Or at least, what kind of material should I read in order to find the answer?

Is it possible to work out why it collapses to this particular number? Can we find other similar examples? Thank you!
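A small numerical experiment (plain Python, assuming IEEE 754 doubles and the standard library's `math.pi` and `math.tan`) makes the asymmetry visible. Because `math.pi` is only the nearest double to $\pi$, $\tan(\pi/2)$ evaluates to a huge but finite $N\approx 1.633\cdot 10^{16}$. Then $1+\frac1N$ rounds to exactly $1$, while $1-\frac1N$ rounds to the double just below $1$, which singles out the negative branch:

```python
import math

# tan(pi/2) is finite in floating point, about 1.633e16,
# because math.pi is only the nearest double to pi.
N = math.tan(math.pi / 2)
print(N)

# 1/N is below eps/2, so the positive branch truncates to 1.0 ...
print((1.0 + 1.0 / N) ** N)        # exactly 1.0

# ... but 1 - 1/N rounds to the double just below 1 (i.e. 1 - eps/2),
# and raising it to the power -N gives the mystery constant.
print((1.0 + 1.0 / (-N)) ** (-N))  # about 6.1296...
```

Consistently with this, $(1-\varepsilon/2)^{-N}=\exp\bigl(-N\ln(1-\varepsilon/2)\bigr)\approx e^{N\varepsilon/2}\approx 6.1296$, so the constant is an artifact of where the two branches land relative to the rounding grid around $1$.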

  • All of your examples are based on peculiar behavior of floating point limitations which disappear when sufficient numerical precision is used. – Somos Jul 29 '21 at 15:58
  • That is true. But the fourth example is immensely different from the other ones. The behavior of the first three can be easily understood, but for the last one I have no explanation – Danilo Guimarães Jul 29 '21 at 16:41
  • If you really care about this, then I think you need to look into the code for the implementation of $\tan$ and $\Gamma$ on your system. It doesn't surprise me much that different initial conditions lead to behaviour that seems to converge to a common value but then jumps to one of two different equilibria. – Rob Arthan Jul 29 '21 at 22:45
  • See https://math.stackexchange.com/questions/3237707/why-does-left1-frac1n-rightn-give-vastly-different-relative-errors and https://math.stackexchange.com/questions/3238722/why-is-that-the-same-equation-with-different-x-values-produces-drastically-diffe for how the error oscillates between $\mu·f(x)$ and $1/(2f(x))$ for $f(x)>10^8$. That the situation becomes trivial for $f(x)>10^{16}$ was already explained in the linked thread. – Lutz Lehmann Jul 31 '21 at 17:02
  • In floating point precision, especially for the pi constant used, you get $N=\tan(\pi/2)=1.633123935319537·10^{+16}$. While this is large enough to truncate completely $1+\frac1N$ to $1$, it is still not too large to give a non-trivial result in $1-\frac1N$. The value of $(1-\frac1N)^{-N}$ then is, as observed, $6.129614098672075$ – Lutz Lehmann Aug 01 '21 at 09:21
