
A Ramanujan summation is a technique invented by the mathematician Srinivasa Ramanujan for assigning a value to divergent infinite series.

In my case, I'm interested in assigning a value to the divergent series

$$\sum_{n=1}^\infty f(n) \ \ \ \ \ \ \ \text{where}\ \ \ \ f(n)=\sqrt[n]{2}$$

According to the Wikipedia page (and my understanding), the Ramanujan summation is

$$\sum_{n=1}^\mathfrak{R} f(n)=\lim_{N\to\infty}\Bigg[\sum_{n=1}^N f(n)-\int_{1}^N f(t)dt\Bigg]$$

Thus

$$\sum_{n=1}^\mathfrak{R} \sqrt[n]{2}=\lim_{N\to\infty}\Bigg[\sum_{n=1}^N \sqrt[n]{2}-\int_{1}^N \sqrt[t]{2}dt\Bigg]$$

Taking the antiderivative

$$\sum_{n=1}^\mathfrak{R} \sqrt[n]{2}=\lim_{N\to\infty}\Bigg[\sum_{n=1}^N \sqrt[n]{2}-\Bigg(\ln2\Big(\text{li}\ 2-\text{Ei}\frac{\ln2}{N}\Big)+N\sqrt[N]{2}-2\Bigg)\Bigg]$$
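(As a quick sanity check, not part of the original question: differentiating the antiderivative used here recovers the integrand. With $\beta=\ln 2$ and $\frac{d}{dx}\text{Ei}(x)=\frac{e^x}{x}$,

$$\frac{d}{dt}\Big[t\,e^{\beta/t}-\beta\,\text{Ei}\Big(\frac{\beta}{t}\Big)\Big]=e^{\beta/t}-\frac{\beta}{t}e^{\beta/t}-\beta\cdot\frac{e^{\beta/t}}{\beta/t}\cdot\Big(-\frac{\beta}{t^2}\Big)=e^{\beta/t}=\sqrt[t]{2},$$

and the boundary terms at $t=1$ contribute the $\ln2\cdot\text{li}\,2$ and $-2$ pieces, using $\text{Ei}(\ln 2)=\text{li}\,2$.)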

Moving some constants outside the limit

$$\sum_{n=1}^\mathfrak{R} \sqrt[n]{2}=2-\ln2\cdot\text{li}\ 2+\lim_{N\to\infty}\Bigg[\sum_{n=1}^N \sqrt[n]{2}-\Bigg(N\sqrt[N]{2}-\ln2\cdot\text{Ei}\frac{\ln2}{N}\Bigg)\Bigg]$$

It's at this point I'm unsure of how to proceed. I'm not terribly confident what the limit converges to. From my computational estimates up to $N=10^8$, I find that

$$\sum_{n=1}^\mathfrak{R} \sqrt[n]{2}\approx1.6$$

But due to floating point errors or slow convergence, it deviates substantially enough for me to not be confident about any more digits.

I'd like to know if this converges at all, and if it does, is there a (reasonably) closed form / relation to other constants?
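For anyone wanting to reproduce the estimate, here is a minimal stdlib-only Python sketch of the definition above. The function name and the use of Simpson's rule for the integral are my own choices, not from the question:

```python
# S(N) = sum_{n=1}^N 2^(1/n) - integral_1^N 2^(1/t) dt,
# with the integral done numerically by Simpson's rule
# (no special functions needed for this direct check).
import math

def S(N, panels=200_000):
    partial_sum = sum(2.0 ** (1.0 / n) for n in range(1, N + 1))
    # Simpson's rule on [1, N] with an even number of panels
    f = lambda t: 2.0 ** (1.0 / t)
    h = (N - 1) / panels
    integral = f(1) + f(N)
    for i in range(1, panels):
        integral += (4 if i % 2 else 2) * f(1 + i * h)
    integral *= h / 3.0
    return partial_sum - integral

print(S(1000))   # ≈ 1.60273, drifting slowly toward ≈ 1.6024 as N grows
```

Even modest $N$ lands close to the value $1.6$ quoted above; the slow drift in later digits is the convergence issue described in the question.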

Graviton

2 Answers


I had not seen that definition of Ramanujan summation, as given in your first formula, before.
But to test it, I tried your third formula $$ \sum_{n=1}^\mathfrak{R} \sqrt[n]{2}=\lim_{N\to\infty}\Bigg[\sum_{n=1}^N \sqrt[n]{2}-\int_{1}^N \sqrt[t]{2}dt\Bigg] \tag 3 $$ implemented in Pari/GP for high precision.

First, using W|A I got the following expression for the integral, setting $ß=\log(2)$: $$ \int \exp\Big(\frac{ß}{t}\Big) dt = t\exp\Big(\frac{ß}{t}\Big) - ß\,\text{Ei}\Big(\frac{ß}{t}\Big) + \text{const} \tag {4.1} $$ and $$ \begin{align}I(N)&= \int_{t=1}^{N} \exp\Big(\frac{ß}{t}\Big) dt \\ &= \Big(N\exp\Big(\frac{ß}{N}\Big) - ß\,\text{Ei}\Big(\frac{ß}{N}\Big)\Big)&-\Big(1\cdot\exp\Big(\frac{ß}{1}\Big) - ß\,\text{Ei}\Big(\frac{ß}{1}\Big)\Big) \\I(N) &=N\exp\Big(\frac{ß}{N}\Big) -2 &- ß\,\Big(\text{Ei}\Big(\frac{ß}{N}\Big)- \text{Ei}(ß)\Big) \end{align} \tag {4.2}$$ With this, already for not-too-large $N$, I get values near the one you found: $$K(N) = \sum_{k=1}^N \exp\Big(\frac{ß}{k}\Big) \tag{4.3}$$ and $$S(N) = K(N) - I(N) \tag {4.4}$$
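The quantities $I(N)$, $K(N)$ and $S(N)$ above can be cross-checked in plain Python (my sketch, not the original Pari/GP code; $\text{Ei}$ is computed from its standard power series $\text{Ei}(x)=\gamma+\ln x+\sum_{k\ge1} x^k/(k\cdot k!)$, which converges quickly for the small positive arguments used here):

```python
import math

B = math.log(2)                      # ß in the text
EULER = 0.5772156649015329           # Euler-Mascheroni constant

def Ei(x):
    # power series for Ei(x), x > 0; 60 terms is ample for x <= log 2
    total, term = 0.0, 1.0
    for k in range(1, 60):
        term *= x / k                # term = x^k / k!
        total += term / k            # accumulates x^k / (k * k!)
    return EULER + math.log(x) + total

def I(N):                            # the closed-form integral
    return N * math.exp(B / N) - 2 - B * (Ei(B / N) - Ei(B))

def K(N):                            # the direct partial sum
    return sum(math.exp(B / k) for k in range(1, N + 1))

def S(N):
    return K(N) - I(N)

print(S(100), S(1000))   # ≈ 1.60585777814, 1.60273245539
```

These reproduce the first two rows of the table below.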

      N   S(N)
     100  1.60585777814
    1000  1.60273245539
   10000  1.60242047744
  100000  1.60238928520

It seems to converge, and to the value you gave in your OP. But carrying the term-by-term summation in $K(N)$ to even higher $N$ (to get more accuracy) is time-consuming, so I rewrote the series so that Pari/GP can evaluate it easily even when $N$ is in the billions. The rewriting is as a double series, where each term $\sqrt[k] 2$ is expanded via the exponential series in $\frac{ß}{k}$:

$$ \begin{array} {} \text{lhs} & =\text{rhs1} &+ \text{rhs2} \\ \hline \exp(ß/1) & = 1+ß/1 & +ß^2/1^2/2! &+ß^3/1^3/3! & + \cdots \\ \exp(ß/2) & = 1+ß/2 & +ß^2/2^2/2! &+ß^3/2^3/3! & + \cdots \\ \exp(ß/3) & = 1+ß/3 & +ß^2/3^2/2! &+ß^3/3^3/3! & + \cdots \\ \vdots & \vdots \\ \exp(ß/N) & = 1+ß/N & +ß^2/N^2/2! &+ß^3/N^3/3! & + \cdots \\ \vdots & \vdots \\ \end{array} \tag 5$$ Looking at the column sums, we see that the first two columns $\text{rhs1}$ give divergent series, but the remaining columns $\text{rhs2}$ are convergent. So we work in two parts. First, evaluate $\text{RHS2}$ in the limit $N \to \infty$ directly as a sum of zeta values: $$ \text{RHS2}(\infty) = ß^2/2!\ \zeta(2) + ß^3/3!\ \zeta(3) + ß^4/4!\ \zeta(4) + \cdots \approx 0.473841903568 \tag {5.1} $$ Since the column sums in $\text{RHS1}$ diverge, we instead express their $N$'th partial sums exactly in terms of $N$ and the harmonic numbers $H(N)=\psi(1+N)+\gamma$ (H(N)=psi(1+N)+Euler in Pari/GP): $\text{RHS1}(N)=N+ ß \cdot H(N)$, so $$ \begin{array} {}K(N) &= N + ß \cdot H(N) &+ 0.473841903568\\ I(N) &= N\exp\Big(\frac{ß}{N}\Big) -2 &- ß\,\Big(\text{Ei}\Big(\frac{ß}{N}\Big)- \text{Ei}(ß)\Big) \\ S(N) &= K(N)&- I(N) \end{array} \tag {6}$$
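The zeta sum $\text{RHS2}(\infty)$ of eq. (5.1) can be reproduced without any special-function library (a sketch; computing $\zeta(m)$ from a truncated sum plus an integral tail correction is my own shortcut, not from the answer):

```python
import math

B = math.log(2)

def zeta(m, J=10_000):
    # truncated sum plus Euler-Maclaurin tail: error ~ m * J^(-m-1) / 12
    s = sum(j ** (-m) for j in range(1, J + 1))
    return s + J ** (1 - m) / (m - 1) - J ** (-m) / 2

rhs2 = 0.0
term = 1.0                           # will hold ß^m / m!
for m in range(1, 60):
    term *= B / m
    if m >= 2:
        rhs2 += term * zeta(m)

print(rhs2)   # ≈ 0.473841903568
```

This matches the constant used for $\text{RHS2}(\infty)$ in eq. (6) to all quoted digits.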

giving convergence to the value

  N      S(N)            difference S(N_{k+1}) - S(N_k)
  10^3   1.60297258955
  10^5   1.60239168746   -0.000580902092269
  10^6   1.60238640626   -0.00000528119790414
  10^12  1.60238581946   -0.000000586799480429
  10^24  1.60238581946   -5.86800097238 E-13
  10^48  1.60238581946   -5.86800097239 E-25

So I think your own approximation was headed in the right direction.


P.s. In eq. 6 the divergence can be removed by cancellation: the exponential in $I(N)$ can be expanded as $N(1+ß/N) + O(1/N)=N + ß + O(1/N)$, and this divergent $N$ cancels against the $N$ in $K(N)$. Next, $H(N)$ can be rewritten as $\log(N)+\gamma$ for $N \to \infty$ and combined with the $\text{Ei}()$ expression in $I(N)$, using $$ \lim_{N\to \infty} \Big(\text{Ei}(ß/N) + \log(N)\Big) = \gamma + \log(\log(2))$$ This leads to the shortened version of eq. 6: $$ \begin{array} {} \lim_{N \to \infty} S(N) &= & ß \cdot (\log(N)+\gamma) &+ 0.473841903568\\ &&- \Big( ß + O(1/N) -2 &- ß \big(\text{Ei}\big(\frac{ß}{N}\big)- \text{Ei}(ß)\big)\Big) \\ &=& ß\gamma -ß+2 -ß\,\text{Ei}(ß) &+ 0.473841903568\\ &&+ ß \big(\text{Ei}\big(\frac{ß}{N}\big)+\log(N) \big) \\ &=& 2+ß\big(2\gamma + \log(ß) -1 -\text{Ei}(ß)\big) &+ 0.473841903568\\ &=& 1.60238581946 \end{array} \tag {6a}$$ where in the second-to-last line the divergence in $N$ has been cancelled and a constant expression emerges.
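The constant in eq. (6a) can be evaluated directly (a sketch in Python; $\text{Ei}$ is computed from its power series $\text{Ei}(x)=\gamma+\ln x+\sum_{k\ge1} x^k/(k\cdot k!)$, and the value $0.473841903568$ for $\text{RHS2}(\infty)$ is taken from eq. (5.1)):

```python
import math

B = math.log(2)                      # ß
EULER = 0.5772156649015329           # Euler-Mascheroni constant
RHS2 = 0.473841903568                # value from eq. (5.1)

def Ei(x):
    # power series for Ei(x), x > 0
    total, term = 0.0, 1.0
    for k in range(1, 60):
        term *= x / k                # term = x^k / k!
        total += term / k            # accumulates x^k / (k * k!)
    return EULER + math.log(x) + total

# closed form of eq. (6a): S = 2 + ß(2γ + log ß - 1 - Ei(ß)) + RHS2
S = 2 + B * (2 * EULER + math.log(B) - 1 - Ei(B)) + RHS2
print(S)   # ≈ 1.60238581946
```

Note that $\text{Ei}(ß)=\text{Ei}(\log 2)=\text{li}(2)$, so the constant can equally be written with $\text{li}(2)$.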


I'm not sure I'm calculating the same thing, and as regularizations of divergent sums need not be unique, I will ask the OP to do some more numerical work on his formula. The following is how I would find a closed-form expression, 'in the manner of Ramanujan summation.'

$$F_n(x) := \sum_{k=1}^n x^{1/k} = \frac{n}{n}\sum_{k=1}^n (x^{n/k})^{1/n} \to n \int_{0}^1 x^{1/(nt)} dt$$ as a Riemann sum as $n \to \infty.$ This is the first term on the right-hand side of your defining equation $$ \sum_{k=1}^{\mathcal{R}} f(k) = \lim_{n \to \infty} \Big(\sum_{k=1}^n f(k) - \int_{1}^n f(t) dt \Big)$$ and has an explicit closed-form $$ F_n(x) = n \ x^{1/n} + \Gamma(0,\frac{\log(x)}{n} )\log(x) . $$ I've used the incomplete gamma function, but the first argument tells us it will reduce to the Ei integral. By the way, this is a perfectly good asymptotic approximation, giving 4 significant figures for $x=2$ and $n=1000.$ The second term also has an explicit expression,

$$ \int_1^n x^{1/t} dt = -x + n\ x^{1/n} + \log{x} \Big(-\text{Ei}(\log{x}/n)+ \text{li}(x) \Big).$$ (Both integrals have been performed in Mathematica.) Now subtract and do an asymptotic expansion. The singular parts cancel and you are left with

$$ \sum_{k=1}^{\mathcal{R}} x^{1/k} = -\big(x+ \log{(x)}\, \text{li}(x) \big) $$ where li$(x)$ = LogIntegral[x] in Mathematica. When I plug in $x=2,$ I get the value of -2.72445. This is very different from the OP's answer, with not even sign agreement. However, we should not forget the Ramanujan summation is closely aligned with zeta regularization, and with it you can 'prove' an infinite sequence of positive numbers is a negative number, e.g.,

$$ 1+2+3+\cdots = \zeta(-1) = -\frac{1}{12}, \qquad \text{where } \zeta(s)=\sum_{k=1}^\infty k^{-s}\ (\Re s>1) \text{ is extended to } s=-1 \text{ by analytic continuation.}$$

I'm wary of regularization, though it does lead to some cute results. Physicists are sometimes forced into such shenanigans with perturbation theory.
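As a numerical sanity check (my own sketch, not from the answer) on the closed form given above for $\int_1^n x^{1/t}\, dt$: compare it against Simpson's-rule quadrature, with $\text{Ei}$ computed from its power series and $\text{li}(x)=\text{Ei}(\log x)$.

```python
import math

EULER = 0.5772156649015329

def Ei(z):
    # power series for Ei(z), z > 0
    total, term = 0.0, 1.0
    for k in range(1, 60):
        term *= z / k
        total += term / k
    return EULER + math.log(z) + total

def closed_form(x, n):
    # -x + n*x^(1/n) + log(x) * (li(x) - Ei(log(x)/n)), with li(x) = Ei(log x)
    lx = math.log(x)
    return -x + n * x ** (1 / n) + lx * (Ei(lx) - Ei(lx / n))

def simpson(x, n, panels=100_000):
    # direct numerical quadrature of integral_1^n x^(1/t) dt
    f = lambda t: x ** (1 / t)
    h = (n - 1) / panels
    total = f(1) + f(n)
    for i in range(1, panels):
        total += (4 if i % 2 else 2) * f(1 + i * h)
    return total * h / 3

print(abs(closed_form(2, 1000) - simpson(2, 1000)))   # small: the two agree
```

The integral formula checks out numerically; the discrepancy between the two answers therefore lies elsewhere (see the comments below on $F_n(x)$).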

user321120
  • This is insightful. Our differing results are intriguing but yours are definitely more concrete. I'm grateful that you generalized beyond the case of $x=2$. – Graviton Nov 12 '20 at 06:15
  • It seems to me that there is some error in defining $F_n(x)$. If I insert, for instance, $N=100$ and compare the direct sum $\sum_{k=1}^N 2^{1/k}$ with $F_{100}(2)$, I get a difference of about $-0.32$, which quickly converges to a constant $d = -0.326832...$ when I insert higher $N$ (even very big $N$, using for the direct sum my second version of $K(N)$). So far I don't have a description of the difference constant $d$. – Gottfried Helms Nov 22 '20 at 23:26
  • The negative of the const $d=-0.326832...$ seems to be $ (2 \gamma +\log(\log(2))-1) \log 2+C_{0.473}$ where the constant $C_{0.473}$ is the evaluation of the RHS2 in my posting. – Gottfried Helms Nov 23 '20 at 00:19