
Let $$S_n = \sum_{m=1}^{n} \frac{n \bmod m}{m!}$$

be the $n$-th partial sum of a series with remainder $$R_{n}=\sum_{m=n+1}^{\infty}\frac{n \bmod m}{m!}=n\sum_{m=n+1}^{\infty}\frac{1}{m!}=n\cdot e - \frac{\lfloor e\cdot n!\rfloor}{(n-1)!},$$ since $n \bmod m = n$ for $m > n$ and $\sum_{m=0}^{n}\frac{1}{m!}=\frac{\lfloor e\cdot n!\rfloor}{n!}$.

This series occurs when embedding the profinite integers $\hat{\mathbb{Z}}$ into the unit interval $\left[0,1\right]$ using the fractional part of the factorial number system. $S_n$ is a rational number.

Computing $n \bmod m$ for each $m$ and summing takes $\mathcal{O}(n)$ arithmetic operations, which is exponential in the number of digits of $n$. I'm wondering if there is a faster way to compute these numbers. Programmatically, arbitrary-precision arithmetic is needed, since $\frac{1}{m!}$ quickly becomes tiny.
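For reference, a minimal brute-force sketch of this $\mathcal{O}(n)$ baseline in Python, using `fractions.Fraction` for exact rationals (the helper name `S` is mine); this is the thing to beat, not a proposed speedup:

```python
from fractions import Fraction

def S(n: int) -> Fraction:
    """Exact S_n = sum_{m=1}^{n} (n mod m)/m! as a reduced fraction."""
    total = Fraction(0)
    fact = 1                          # running value of m!
    for m in range(1, n + 1):
        fact *= m
        total += Fraction(n % m, fact)
    return total

print(S(5))   # 7/8, i.e. 0/1! + 1/2! + 2/3! + 1/4! + 0/5!
```

Note that `Fraction` reduces after every addition, so gcd costs pile on top of the big-integer arithmetic; even without that, the running denominator grows like $n!$.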

Initial thoughts:

  1. Factor $n$, compute the residues modulo prime powers, $n \bmod p^{r_i}$, use divisibility criteria to reduce the number of summands (terms with $m \mid n$ vanish), and use the Chinese Remainder Theorem to reconstruct.

  2. Rewrite $n \bmod m = n - m\lfloor n/m \rfloor$, and expand the floor as a Fourier series to get $\displaystyle n - m\left(n/m - 1/2 + \frac{1}{\pi}\sum_{k=1}^{\infty}\frac{\sin(2\pi k n/m)}{k}\right) = m/2 - \frac{m}{\pi}\sum_{k=1}^{\infty}\frac{\sin(2\pi k n/m)}{k}$. Rearranging to get an exponential sum would be the next guess, but without absolute convergence I'm not confident the rearrangement is justified. A quick numerical check of the truncated series is sketched below.
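This check is purely numerical (the function name `mod_via_fourier` and the cutoff `K` are mine, not from the thread); it only confirms the pointwise identity away from the discontinuities $m \mid n$, where the series converges to $m/2$ instead of $0$:

```python
import math

def mod_via_fourier(n: int, m: int, K: int = 10_000) -> float:
    """Truncated sawtooth series for n mod m (breaks down when m divides n)."""
    s = sum(math.sin(2 * math.pi * k * n / m) / k for k in range(1, K + 1))
    return m / 2 - (m / math.pi) * s

print(7 % 3, mod_via_fourier(7, 3))     # 1 vs. roughly 1.0 (slow convergence)
print(10 % 4, mod_via_fourier(10, 4))   # 2 vs. exactly 2.0 (all terms vanish)
```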

See also: https://mathoverflow.net/questions/195325/how-to-calculate-the-sum-of-remainders-of-n

  • What units are you counting the complexity in? Even simply writing out $n!$ (let alone computing it) requires $\mathcal{O}(n\log n)$ digits, hence $\mathcal{O}(n)$ time ($\approx$ bit operations) is out of reach. – metamorphy Dec 26 '20 at 07:52
  • Yes, something like bit operations. More practically, is there a speedup so that we can compute $S_{n}$ with $n=10^8$ or so on a normal laptop? True, $n!$ is large, but the question asks about $S_n$, which may or may not require computing $m!$, even though the naive algorithm/sum does include it. For instance, in the linked sum we end up looking at the divisor sum function $\sum_{m}\sigma(m)$, which does not necessarily require computing moduli, or $e = \lim_n (1+1/n)^{n}$, with no mention of factorials. It seems the rational number output has size $\approx n\log(n)$, which doesn't bode well. – Jackson Walters Dec 27 '20 at 14:43
  • I just started your calculation for $n = 10^8$ using brute force with GMP's (MPIR on Windows) rationals on a laptop. This means I am calculating your number exactly, with arbitrary-precision integers as the numerator and denominator. The only thing I "optimized" is the calculation of the factorial term, which is updated each step. Will let you know how long it takes. If I have to stop it for some reason, I'll run it again tonight. – Charlie S Dec 28 '20 at 14:08
  • Do you want to know this number exactly, or do you have any stopping criteria based on the desired precision? I think you could use $(m-1)/m!$ as an upper bound on each term, with Stirling's approximation (or some such trick), to determine a cutoff to save you a bunch of time. – Charlie S Dec 28 '20 at 14:45
  • Computing $\sum_{n=a}^b(1/n!)$ is of the same complexity as computing $b!$ alone (hint: divide and conquer). Now, with the "$n-m\lfloor n/m\rfloor$" approach, we may use the fact that, for $1\leqslant m\leqslant n$, $\lfloor n/m\rfloor$ takes only $\mathcal{O}(\sqrt{n})$ values; a blocking sketch along these lines appears after the comments (I've used this idea e.g. here; this is not as efficient as the approach by R. Sladkey from the linked MO question, but it is simpler and more universal). – metamorphy Dec 28 '20 at 18:43
  • So, if we can compute $n!$ in $\mathcal{O}\big(F(n)\big)$ time, then we can compute $S_n$ in $\mathcal{O}\big(F(n)\sqrt{n}\big)$ time [or perhaps a little bit better]. But the issue of computing factorials still remains. A real answer to the question would be a proof that the result is indeed "$\Omega(n\log n)$ bits long". This doesn't feel hard, but looks not that interesting (to me personally). – metamorphy Dec 28 '20 at 18:51
  • @CharlieS Very cool, thank you! For $n=10^5$, my PC takes about 0.285 s, and 2.98 s for $n=10^6$. Certainly exponential. I would note that the output for $n=10^8$ will be ~1 GB, and not very compressible. To your second point, what I am really interested in is being able to compute many values of $S_n$ quickly in order to visualize maps $\mathbb{Z} \rightarrow \mathbb{Z}$ in a compact way, which doesn't require perfect precision. Summing the first $k$ terms of the series for $S_n$ is a good approximation: the truncation error is at most $\frac{1}{k!} - \frac{1}{n!}$. Thus, we only need 21 or so terms for 64-bit precision. – Jackson Walters Dec 28 '20 at 23:49
  • @metamorphy Right, as $\sum_{m=a+1}^{b}\frac{1}{m!} = \frac{\lfloor e\,b! \rfloor}{b!} - \frac{\lfloor e\,a! \rfloor}{a!}$ from incomplete gamma function identities; however, it's not clear to me that $\sum_{m=a}^{b}\frac{h(m)}{m!}$ always requires computing factorials, due to possible cancellation. A rearrangement resulting in $\mathcal{O}(\sqrt{n})$ is about the best I was hoping for. – Jackson Walters Dec 29 '20 at 17:13
  • @metamorphy Agreed, a proof regarding the output length would put this to bed. Putting $n!$ in the denominator, the un-reduced numerator looks like $\sum_{k=0}^{n-1}\bigl(n \bmod (n-k)\bigr)\frac{n!}{(n-k)!}$. The primes in the factorization of $n!$ are small (of length $\le \log(n)$), whereas the primes occurring in the numerator are potentially more random and much larger, so intuitively they will not cancel and the overall length/complexity of the output will be large. Making that rigorous feels hard to me, and mildly interesting. I suppose this is really a base-factorial to base-10 conversion, and I'm happy with a fast approximation (sketched below). – Jackson Walters Dec 29 '20 at 17:16
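Following metamorphy's comments, here is a hedged sketch of the $\mathcal{O}(\sqrt{n})$ quotient-block trick, applied to the plain remainder sum $\sum_{m=1}^{n}(n \bmod m)$ from the linked MO question (the function name `remainder_sum` is mine, and this is not R. Sladkey's method). For $S_n$ itself, each block would additionally be weighted by partial sums of $1/m!$ and $1/(m-1)!$, which is where the factorial cost $F(n)$ re-enters.

```python
def remainder_sum(n: int) -> int:
    """Sum of n mod m for m = 1..n, via the O(sqrt(n)) quotient-block trick."""
    total, m = 0, 1
    while m <= n:
        q = n // m
        m2 = n // q                      # largest m' with n // m' == q
        count = m2 - m + 1
        sum_m = (m + m2) * count // 2    # sum of m over the block [m, m2]
        total += n * count - q * sum_m   # sum of (n - q*m) over the block
        m = m2 + 1
    return total

# Sanity check against brute force on small inputs.
assert all(remainder_sum(n) == sum(n % m for m in range(1, n + 1))
           for n in range(1, 300))
```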
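And since the closing comment settles for a fast approximation, a small sketch of the truncation bound from the comments, $\bigl|S_n - \sum_{m\le k}(n\bmod m)/m!\bigr| \le 1/k! - 1/n!$, in plain floats (the usable accuracy is then double precision, slightly less than the $2^{-64}$ the bound allows at $k=21$; the helper name `S_approx` is mine):

```python
def S_approx(n: int, k: int = 21) -> float:
    """Truncate the series after k terms; the truncation error is below 1/k!."""
    total, fact = 0.0, 1.0
    for m in range(1, min(k, n) + 1):
        fact *= m
        total += (n % m) / fact
    return total

# Cheap sampling of the map n -> S_n for plotting, O(k) work per value.
samples = [S_approx(n) for n in range(1, 10**6, 1000)]
```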

0 Answers