from math import pow, factorial

if __name__ == '__main__':
    # Parameters
    x = -40
    n_terms = 124

    sum = 0
    for i in range(n_terms):
        sum += pow(x, i) / factorial(i)
    print(sum)
Output:
5.881161314775606
We know $e^{-40}$ is essentially $0$, yet this somehow evaluates to ~5.88. I'm not sure where this error is coming from. Sure, floats aren't exact, but a 64-bit double still carries ~16 significant decimal digits, so the rounding error in each individual term of the Taylor series should be insignificantly small. I find it very hard to believe those errors would accumulate to ~6 over a sum of only 124 terms.

Could you produce a mathematical argument for why an error of ~6 is plausible?
Note: according to the Lagrange error bound, the exact sum of the first 124 terms should be extremely close to $0$.
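(A quick sanity check of that note, not part of the original code: the same partial sum can be computed in exact rational arithmetic, so the only rounding happens in the final conversion to float.)

```python
from fractions import Fraction
from math import factorial

# Exact partial sum of the first 124 Taylor terms of e^{-40}.
# Fractions never round, so this is the true value of the partial sum.
exact = sum(Fraction(-40) ** i / factorial(i) for i in range(124))

# The alternating-series bound puts this within the first omitted term
# (about 1e-8) of e^{-40}, i.e. tiny -- nowhere near 5.88.
print(float(exact))
```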
`[abs(pow(-40, i)/factorial(i) - (-40)**i/factorial(i)) for i in range(120)]` has maximum entry 2, attained when i is 35 and 39. The integer exponentiation is exact; the error caused by computing the power as a float is significant. Even worse, your terms are large and alternate in sign, so you get catastrophic cancellation. – Matthew Towers Jan 11 '25 at 13:42
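A sketch illustrating the comment's point: near the peak of the series the float grid is so coarse that each term can be off by O(1) in absolute terms, and those errors survive the cancellation. The workaround shown at the end (summing the all-positive series for $e^{+40}$ and taking the reciprocal) is an addition of mine, not from the comment.

```python
from math import factorial, ulp, exp

# The largest term in magnitude is around i = 39-40: |(-40)^40 / 40!|.
peak = 40**40 / factorial(40)  # roughly 1.5e16

# Doubles in [2^53, 2^54) are spaced 2 apart, so representing any term
# near the peak already incurs an absolute error of up to ~1.
print(ulp(peak))  # 2.0

# ~124 terms, each in error by O(1), cancelling down to a true answer of
# ~4e-18: the printed digits are pure cancellation noise.

# Cancellation-free alternative: sum the all-positive series for e^{+40},
# then invert. All rounding errors stay relatively small.
s = sum(40**i / factorial(i) for i in range(124))
print(1 / s)  # ≈ 4.25e-18, close to exp(-40)
```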