
Two summation methods $\Sigma_1, \Sigma_2 : (\mathbb{N} \rightarrow \mathbb{C}) \rightharpoonup \mathbb{C}$ are consistent iff $\Sigma_1 \cup \Sigma_2$ is functional (right-unique), i.e.

$$ \forall x \in (\operatorname{dom} \Sigma_1 \cap \operatorname{dom} \Sigma_2) : \Sigma_1(x) = \Sigma_2(x) $$
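For concreteness, here is a rough numerical sketch of this condition (an illustration only, not part of the question): Cesàro and Abel summation are approximated by truncation, each method returns `None` when it declines to assign a value, and consistency is checked only on the intersection of the two domains. The cutoffs and tolerances are arbitrary choices.

```python
# A "summation method" is modelled as a partial function: it returns None when
# it declines to assign a value.  Cutoffs and tolerances below are arbitrary.

def cesaro_sum(a, N=100_000, tol=1e-3):
    """Approximate (C,1) sum of a(0) + a(1) + ...; None if the Cesaro means do not settle."""
    partial, partial_sums = 0.0, []
    for n in range(N):
        partial += a(n)
        partial_sums.append(partial)
    mean_half = sum(partial_sums[: N // 2]) / (N // 2)
    mean_full = sum(partial_sums) / N
    return mean_full if abs(mean_full - mean_half) < tol else None

def abel_sum(a, N=200_000, tol=1e-2):
    """Approximate Abel sum: evaluate sum a(n) x^n at x close to 1; None if unstable."""
    def f(x):
        return sum(a(n) * x ** n for n in range(N))
    v1, v2 = f(0.999), f(0.9999)
    return v2 if abs(v2 - v1) < tol else None

def consistent(sigma1, sigma2, test_series, tol=1e-3):
    """Check sigma1 == sigma2 (within tol) on the intersection of their domains."""
    for name, a in test_series.items():
        v1, v2 = sigma1(a), sigma2(a)
        if v1 is not None and v2 is not None and abs(v1 - v2) > tol:
            return False, name
    return True, None

test_series = {
    "Grandi: 1 - 1 + 1 - 1 + ...": lambda n: (-1) ** n,  # both methods give ~1/2
    "geometric: sum of 2^(-n)":    lambda n: 0.5 ** n,   # both methods give ~2
    "1 + 2 + 3 + ...":             lambda n: n + 1,      # both decline (None), so no conflict
}

print(consistent(cesaro_sum, abel_sum, test_series))     # (True, None)
```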

Many of the well-known summation methods (Cesàro summation, Abel summation, Borel summation, Euler summation, etc.) turn out to be consistent with each other. Are there any examples of mutually inconsistent summation methods that are not ad hoc, i.e. not motivated by or constructed specifically for the purpose of being mutually inconsistent? If not, is there some explanation for this fact? Is it possible there's some kind of ideal "general summation" that all these methods are approaching?

Note that this is different from the question of non-constructive extensions such as those given by the Hahn-Banach theorem.

user76284
  • If I read Konrad Knopp's book "Infinite Series" correctly (actually I read the German edition), then he says that assigning a value to a divergent series is partly a matter of meaningfulness and, of course, of consistency with other methods. From this I'd conclude that a summation procedure which does not fulfill that requirement simply does not "survive" in the common math universe, so I think you wouldn't find such instances. (...) – Gottfried Helms Nov 26 '20 at 10:56
  • (...) But as an exercise in checking meaningfulness and consistency, you might be interested in a summation procedure which I proposed a couple of years ago and of which I've been fairly convinced in that regard. It claims to sum series like $1-e^x+e^{e^x}-\dots$, which should be un-summable by the classical textbook methods; I had an interesting discussion about this in a math forum and excerpted the significant parts of the discussion. Perhaps the following two texts create some curiosity... (...) – Gottfried Helms Nov 26 '20 at 11:08
  • (...) A discussion of my idea in the newsgroup sci.math.research: http://go.helms-net.de/math/tetdocs/IterationSeriesSummation_1.htm , and my initial idea written up in a PDF file: http://go.helms-net.de/math/tetdocs/10_4_Powertower_article.pdf . Note that I wrote those articles in a rather naive and exploratory state, without the tools for rigorous analysis and proof. But it shows an example of a divergent-series summation procedure which has no established status so far (and might be accepted or thrown away, depending on later verification). – Gottfried Helms Nov 26 '20 at 11:13

1 Answer


I can think of two examples.

  1. Analytic continuation. This very often gives different results depending on which function we choose to continue analytically; a concrete numerical sketch follows this list.

  2. Ramanujan's summation. There are several slightly different methods under this name; one of them, when compared to zeta regularization, removes the pole of $\zeta(s)$ at $s=1$. It therefore gives the finite value $\gamma$ for the harmonic series, but near the pole its values differ from those of the zeta function.
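For point 1, here is a concrete instance (my own example, not one from the answer): the termwise series $1+1+1+\cdots$ can be embedded into two different Dirichlet series, and analytic continuation to $s=0$ assigns it two different finite values.

```python
# Two analytic continuations of the same termwise series 1 + 1 + 1 + ...
# Writing the n-th term as n^(-s) (n >= 1) gives the Dirichlet series of
# zeta(s); writing it as (n+1)^(-s) gives zeta(s) - 1.  Continued to s = 0,
# the two embeddings assign different finite values to identical terms.
from mpmath import zeta

print(zeta(0))      # -0.5 : value via the embedding  sum_{n>=1} n^(-s)
print(zeta(0) - 1)  # -1.5 : value via the embedding  sum_{n>=1} (n+1)^(-s)
```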

A technical caveat: Ramanujan's summation depends on the derivatives of the function being summed, and hence on its values at non-integer points. This problem can be avoided by restricting the method to Newton-analytic functions (those equal to their Newton series expansion). Such functions have, roughly speaking, no arbitrary variation between integer points: their values at non-integer arguments are fully determined by their values at the integers. A numerical comparison with zeta regularization is sketched below.
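The following sketch makes the comparison in point 2 concrete. It assumes the convention (used e.g. by Candelpergher) that the Ramanujan sum of $\sum_{n\ge1} n^{-s}$ equals $\zeta(s) - \frac{1}{s-1}$: subtracting the pole term leaves the finite value $\gamma$ at $s=1$, while away from the pole the assignment differs from plain zeta regularization by $\frac{1}{s-1}$.

```python
# Ramanujan summation of the Dirichlet series sum_{n>=1} n^(-s) versus zeta
# regularization, under the assumed convention R(s) = zeta(s) - 1/(s-1).
from mpmath import mp, mpf, zeta, euler

mp.dps = 40  # extra digits: zeta(s) and 1/(s-1) nearly cancel near the pole

def ramanujan_dirichlet(s):
    """Assumed convention for the Ramanujan sum of 1 + 2^(-s) + 3^(-s) + ..."""
    return zeta(s) - 1 / (s - 1)

s = mpf(1) + mpf("1e-20")               # just to the right of the pole of zeta
print(ramanujan_dirichlet(s))           # ~0.5772156649... = Euler-Mascheroni constant
print(euler)                            # gamma itself, for comparison
print(ramanujan_dirichlet(2), zeta(2))  # away from the pole the values differ by 1/(s-1) = 1
```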

Anixx