6

I have seen a lot of questions on Math SE like Why does 1+2+3+⋯=−1/12? and S=1+10+100+1000+10000+…=−1/9? How is that?, and many others. I also saw this one on Ramanujan summation, but I do not get the contradiction.

I am not asking how the sum of such a series is calculated, since I have read those articles; rather, I want an explanation of the logic behind these series.

  • Are these contradictory results?
  • What is the logic behind such series?
  • How can the sum of infinitely many positive numbers be equal to a negative one?
  • Is the problem with infinity $\infty$?
  • If someone uses these results, they can derive a lot of absurd conclusions (e.g. $1=0$); how can this be explained?

I appreciate your help. Thanks.

npisinp

  • It's comparable to the idea that a function with a discontinuity can still have a limit. Technically, $\sin(x)/x$ has no value at $x=0$, just as the sum $1+2+3+\dots$ has no value, but by following carefully constructed rules about limits/sums, a value that makes sense in certain contexts is obtained. – Foo Barrigno Apr 01 '14 at 15:26
  • In a way, one might say one is *not actually adding* $1+2+3+\dots$, but doing something *entirely else* when they say that it equals $-1/12$. – Simply Beautiful Art Sep 06 '17 at 19:26

4 Answers

6

L. Euler explained his assumptions about infinite series - convergent or divergent - with the following idea (I am just paraphrasing; I don't have the article at hand, but you can look up the treatise "De seriebus divergentibus" in the Euler archives): the evaluation of an infinite series is different from a finite sum. But whenever we want to assign a value to such a series, we should do it in the sense that the series is the result of an infinitely applied arithmetic operation. For example, the geometric series (to which we meanwhile assign a value) occurs as the result of the infinite formal long division $s(x) = {1 \over 1-x } \to s(x) = 1 + x + x^2 + \dots$, and we then insert the value for $x$ into the finite rational formula.
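A tiny numerical sketch of this viewpoint: the closed form $1/(1-x)$ exists even where the series diverges, and evaluating it at $x=10$ yields the $-1/9$ mentioned in the question (the function name below is mine, not Euler's):

```python
def geometric_closed_form(x):
    # The finite rational formula behind the formal long division
    # 1/(1-x) = 1 + x + x^2 + ...
    return 1 / (1 - x)

# Inside the radius of convergence, the partial sums approach the closed form:
partial = sum(0.5 ** k for k in range(50))
print(partial, geometric_closed_form(0.5))   # both are essentially 2.0

# Outside it, the partial sums diverge, but the closed form still
# assigns a value; for x = 10 this is -1/9, i.e. 1 + 10 + 100 + ... "=" -1/9:
print(geometric_closed_form(10))
```

The "value" assigned to the divergent series is thus the value of the finite expression that generates it, not the limit of its partial sums.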

Possibly this is meant in the sense that, similarly, we can regard infinite periodic continued fractions as representations of finite expressions like $\sqrt{1+x}$ and others. It is somehow "compatible" with an axiom we require for number theory: that a general infinitely repeated (symbolic) algebraic operation can have a closed-form representation. (In the German translation of E247 this occurs in §11 and §12.)

From this, I think, Euler summation and the other manipulations L. Euler performed on infinite (convergent and divergent) series can be nicely understood.

[update] The Euler archives seem to have moved to the MAA; the original links, for instance //www.eulerarchive.com/, have been taken over by completely unrelated commercial sites. A seemingly valid link to Ed Sandifer's column "How Euler did it", however only accessible with internal MAA access, is this (but I think via webarchive.org one can still reach the formerly open pages).

[update 2]: here is a currently valid link to Ed Sandifer's article

3

I have a new idea.

The sum of the natural numbers is $$ S_n = \sum_{k=1}^n k. $$ We define the function $$ G_n(\epsilon) = \sum_{k=1}^n k \exp(-k\epsilon). $$ The Abel sum is $$ S_A = \lim_{\epsilon \to 0+} \left( \lim_{n \to \infty} G_n(\epsilon) \right). $$ Unfortunately, it diverges.

Then we define a new function $$ H_n(\epsilon) = \sum_{k=1}^n k \exp(-k\epsilon) \cos(k\epsilon). $$ This function is damped and oscillating. The damped oscillation sum is $$ S_H = \lim_{\epsilon \to 0+} \left( \lim_{n \to \infty} H_n(\epsilon) \right). $$ Surprisingly, it converges to $-1/12$.
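Both limits can also be checked with a few lines of Python. This is only a sketch: the truncation at 200 000 terms is my choice, large enough that the exponential damping makes the discarded tail negligible for the values of $\epsilon$ used:

```python
import math

def G(eps, n_terms=200_000):
    # Partial sum of k * exp(-k*eps); blows up like 1/eps**2 as eps -> 0+
    return sum(k * math.exp(-k * eps) for k in range(1, n_terms + 1))

def H(eps, n_terms=200_000):
    # Partial sum of k * exp(-k*eps) * cos(k*eps), the damped oscillation sum
    return sum(k * math.exp(-k * eps) * math.cos(k * eps)
               for k in range(1, n_terms + 1))

for eps in (0.1, 0.01, 0.001):
    print(eps, G(eps), H(eps))
# G(eps) grows without bound, while H(eps) settles near -1/12 = -0.0833...
```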

We can confirm the result by numerical computation. Enter the following formula into Wolfram Alpha:

lim sum k exp(-kx)cos(kx),k=1 to infty,x to 0+

Or follow this URL, which has the formula pre-filled:

https://www.wolframalpha.com/input/?i=lim+sum+k+exp%28-kx%29cos%28kx%29%2Ck%3D1+to+infty%2Cx+to+0%2B

The paper can be found by searching for the following keywords:

Zeta function regularization of the sum of all natural numbers by damped oscillation summation method

  • When you get the Abel sum, render it as a Laurent series about $\epsilon=0$. Divide by $\epsilon\cdot2\pi i$ and integrate counterclockwise around a circle centered at $\epsilon=0$ (thus obtaining the Cauchy principal value). This converges to the zero-power coefficient of the Laurent series, which is $-1/12$. – Oscar Lanzi Apr 13 '25 at 23:45
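The zero-power coefficient mentioned in the comment above can be verified with a computer algebra system. A sketch using SymPy, assuming the standard closed form $\sum_{k\ge1} k e^{-k\epsilon} = e^{-\epsilon}/(1-e^{-\epsilon})^2$ for the Abel-regularized sum:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Closed form of sum_{k>=1} k*exp(-k*eps), valid for eps > 0
abel = sp.exp(-eps) / (1 - sp.exp(-eps))**2

# Laurent expansion about eps = 0: 1/eps**2 - 1/12 + eps**2/240 + ...
laurent = sp.series(abel, eps, 0, 4).removeO()
print(laurent)

# The zero-power (constant) coefficient is the regularized value
print(laurent.coeff(eps, 0))
```

Note that the divergent part of the expansion is the pure pole $1/\epsilon^2$, which is exactly what the contour integration in the comment discards.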
1
  • The results are contradictory if you manipulate them freely. Each summation method tries to preserve certain properties of ordinary addition, and if you mess around with the results too much you can lose some of them, including regularity, linearity, stability, and invariance under finite re-indexing.
  • The logic behind these strange-looking results is that we first developed formulas and methods that produce sensible answers where they make sense. Then someone decided, "this doesn't seem like a logical idea, but why don't I apply these methods to things they shouldn't work on?" And the surprising part is that the answers agree: different methods, none of which "should" make sense here, keep producing the same values, so if one result is accepted, the others follow.
  • As a wise man once said, the diverging summation is really doing something with analytic continuation past a pole, and that is how a lot of positive numbers can end up "adding" to a negative number - stuff I don't really understand.
  • The problem with infinity is exactly that analytic-continuation business...
  • The results can produce odd things, as I said, because of the way you attempt to manipulate the result and the summation; see the properties in the first bullet.

I hope I've made a contribution and, as my teacher once said, "I hope that you leave today more comfortably confused than yesterday."

1

My view is that it's best to focus on what the math itself is telling us. Series arise in mathematics as the values of functions evaluated at some point, when those values are computed using a method that yields a series, e.g. a Taylor expansion.

Whatever method you use to generate the series, the answer for the value of the function presented to you by the math itself is not that you should add up an infinite number of terms of the series. In fact, addition is only defined for a finite number of terms, so the math couldn't possibly tell you to add up an infinite number of terms!

What the math tells you is to add up a finite number of terms of the series and then to add a remainder term to that finite sum. This is true for both divergent and convergent series; the math itself doesn't distinguish between these types of series. Theorems that let you bound the remainder term give you approximations of the sum, but to evaluate the sum exactly, you need to be able to calculate the remainder term itself.
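The point that "finite partial sum plus remainder" is exact for convergent and divergent series alike can be illustrated with the geometric series, whose remainder term is known in closed form. A sketch (the function names are mine; exact rational arithmetic avoids floating-point cancellation):

```python
from fractions import Fraction

def S(x, n):
    # Partial sum of the geometric series: 1 + x + ... + x**n
    return sum(x ** k for k in range(n + 1))

def R(x, n):
    # Exact remainder after the first n+1 terms, from the closed form 1/(1-x)
    return x ** (n + 1) / (1 - x)

# Convergent case (x = 3/10): S + R equals 1/(1-x) = 10/7 exactly, for every n
x = Fraction(3, 10)
print([S(x, n) + R(x, n) for n in (1, 5, 20)])

# Divergent case (x = 10): S + R is still exactly 1/(1-x) = -1/9, for every n
x = Fraction(10)
print([S(x, n) + R(x, n) for n in (1, 5, 20)])
```

In the divergent case both $S(n)$ and $R(n)$ grow without bound, but their sum is the same finite value for every $n$.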

When we're presented a series without any function being specified whose expansion yields the given series, we should assume that there exists such a function that when expanded, yields the given series, and that this function is maximally analytic, meaning that any nonanalytic behavior is dictated by the series. The value of the series is then the value of the function at the point that corresponds to the value of the expansion parameter.

This is then consistent with the standard definition of the value of a convergent series as the limit of the partial sums, because maximal analyticity implies that the remainder term must tend to zero for convergent series.

In the case of divergent series, one can argue as follows. If the partial sum of the first $n$ terms is denoted $S(n)$, with the corresponding remainder term denoted $R(n)$, then the value of the infinite series is $S(n) + R(n)$. This means that the value of the series equals the constant term in the large-$n$ expansion of $S(n)$ plus the constant term in that of $R(n)$, for arbitrary $n$. As I've explained here, the constant term in the expansion around infinity of the expression:

$$\int_{N-1}^N R(x) dx\tag{1}$$

must be zero. This then fixes the constant term in $R(n)$. We then obtain the expression for the value of the sum of the infinite series as:

$$\operatorname*{con}_{N}\int_{N-1}^NS(x)dx\tag{2}$$

where $\displaystyle \operatorname*{con}_N$ denotes taking the constant term of the large-$N$ expansion. I give a number of examples of calculations of the sums of divergent series using (2) in this other, more elaborate answer.
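As a worked check of the recipe (2) (my own example, not from the answer itself): for $1+2+3+\dots$ the partial sum is $S(n)=n(n+1)/2$, and the constant term of $\int_{N-1}^N S(x)\,dx$ can be computed with SymPy:

```python
import sympy as sp

N, x = sp.symbols('N x', positive=True)

# Partial sum of 1 + 2 + ... + n, continued to a real argument
S = x * (x + 1) / 2

# Formula (2): integrate S over [N-1, N] and expand in N
integral = sp.expand(sp.integrate(S, (x, N - 1, N)))
print(integral)               # N**2/2 - 1/12

# The constant (N-free) term is the regularized value of the series
print(integral.coeff(N, 0))   # -1/12
```

The $N$-dependent part $N^2/2$ is discarded by taking the constant term, leaving exactly the $-1/12$ discussed throughout this thread.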