4

As is well known, if $a_n=n$ and $s=0$, then the power series $$f_s(x)=\sum_{n=0}^{\infty}a_n^{s} x^{a_n}=\sum_{n=0}^{\infty} x^{n}=\frac{1}{1-x},\qquad 0<x<1,$$ so that $\lim_{x\to1^-}(1-x)f_s(x)=1$.
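As a quick numerical illustration of this warm-up case (a throwaway Python sketch; the helper name is mine), note that $(1-x)\sum_{n<N}x^n=1-x^N$, which is already indistinguishable from $1$ for $x=0.99$ and moderate $N$:

```python
# Warm-up case a_n = n, s = 0: (1-x) * sum_{n<N} x^n = 1 - x^N -> 1 as x -> 1^-.
def partial_f0(x, N=10_000):
    """Partial sum of the geometric series sum_{n=0}^{N-1} x^n."""
    total, term = 0.0, 1.0
    for _ in range(N):
        total += term
        term *= x
    return total

check = (1 - 0.99) * partial_f0(0.99)  # truncation error is 0.99^10000, negligible
```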

For any integer $n\geq 0$, write $n$ in binary as $$n=c_0+c_12+c_22^2+\cdots+c_k2^k$$ for some $k\geq0$ and digits $c_j\in\{0,1\}$, $0\leq j\leq k$. Let $s=1-\log2/\log4=1/2$ and define $$a_n:=c_0+c_14+c_24^2+\cdots+c_k4^k,\qquad n\geq0.$$ Consider the lacunary power series $f_{1/2}(x):=\sum_{n=0}^{\infty}a_n^{1/2} x^{a_n}$. Does $\lim_{x\to1^-}(1-x)f_{1/2}(x)$ exist?
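For concreteness, $a_n$ is the Moser–de Bruijn sequence (the sums of distinct powers of $4$); a small Python sketch (the function name is mine) that reinterprets the binary digits of $n$ in base $4$:

```python
def a(n):
    """Reinterpret the binary digits of n as base-4 digits."""
    v, place = 0, 1
    while n:
        v += (n & 1) * place
        n >>= 1
        place *= 4
    return v

first_terms = [a(n) for n in range(8)]  # 0, 1, 4, 5, 16, 17, 20, 21
```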

Mittens
  • 46,352

2 Answers

5

The answer is negative: in fact, $(1-x)f_{1/2}(x)$ oscillates as $x\to 1^-$. The zeta-function-style approach below explicitly computes the Fourier expansion of the oscillating part as a function of $\log(-\log x)$.


The series $A(s):=\sum_{n=1}^\infty a_n^{-s}$ converges (absolutely) for $s\in\mathbb{C}$ with $\Re s>1/2$, and defines an analytic function there. Now let $A_\pm(s):=\sum_{n=1}^\infty(-1)^{n-1}a_n^{-s}$; since $a_{2n}=4a_n$ and $a_{2n+1}=4a_n+1$, we get \begin{gather} A_\pm(s)=\sum_{n=1}^\infty a_n^{-s}-2\sum_{n=1}^\infty a_{2n}^{-s}=(1-2^{1-2s})A(s); \\ A_\pm(s)=1-\sum_{n=1}^\infty\big((4a_n)^{-s}-(4a_n+1)^{-s}\big). \end{gather} The last series converges for $\Re s>-1/2$, and defines the analytic continuation of $A_\pm(s)$ there.
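Both displayed identities can be sanity-checked numerically. A Python sketch (truncation level and helper names are mine), using the recursions $a_{2n}=4a_n$, $a_{2n+1}=4a_n+1$ and testing $A_\pm(s)=(1-2^{1-2s})A(s)$ at the sample point $s=2$, well inside the half-plane of absolute convergence:

```python
# Check a_{2n} = 4 a_n, a_{2n+1} = 4 a_n + 1, and A_pm(s) = (1 - 2^{1-2s}) A(s)
# at s = 2; since 4 a_n >= n^2, the tails of both partial sums are tiny.
def a(n):
    v, p = 0, 1
    while n:
        v += (n & 1) * p
        n >>= 1
        p *= 4
    return v

N = 100_000
s = 2.0
A_partial = sum(a(n) ** -s for n in range(1, N + 1))
Apm_partial = sum((-1) ** (n - 1) * a(n) ** -s for n in range(1, N + 1))
factor = 1 - 2 ** (1 - 2 * s)  # = 7/8 at s = 2
```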

Thus, we have $A(s)$ analytically continued onto $\Re s>-1/2$, with simple poles at $$ s=\frac12+s_m,\quad s_m:=\frac{m\pi i}{\log2},\quad m\in\mathbb{Z} $$ if $A_\pm(1/2+s_m)\neq 0$, which can be verified numerically for a few values of $m$.


Next, we need an estimate of $|A_\pm(\sigma+i\tau)|$ as $|\tau|\to\infty$, with $\sigma>-1/2$ fixed. Write $$ a^{-\sigma-i\tau}-(a+1)^{-\sigma-i\tau}=a^{-\sigma}\big(a^{-i\tau}-(a+1)^{-i\tau}\big)+(a+1)^{-i\tau}(a^{-\sigma}-(a+1)^{-\sigma}) $$ (with $a>0$). Since $$ |a^{-i\tau}-(a+1)^{-i\tau}|=2\left|\sin\left(\frac\tau2\log\frac{a+1}a\right)\right|\leqslant\frac{|\tau|}a $$ and (one can obtain from Bernoulli's inequality that) $$ |a^{-\sigma}-(a+1)^{-\sigma}|\leqslant|\sigma|a^{-1-\sigma}, $$ we obtain $$ \left|a^{-\sigma-i\tau}-(a+1)^{-\sigma-i\tau}\right|\leqslant(|\tau|+|\sigma|)a^{-1-\sigma}.\tag{*}\label{estimate} $$ If we put $a=4a_n$ and sum over $n$, we get a crude estimate $|A_\pm(\sigma+i\tau)|=O(|\tau|)$.
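Since $(*)$ is proven, a random spot-check is only reassurance; still, here is a small Python sketch (the sampling ranges are arbitrary choices of mine):

```python
import random

# Spot-check of (*): for a > 0,
# |a^{-sigma - i tau} - (a+1)^{-sigma - i tau}| <= (|tau| + |sigma|) a^{-1-sigma}.
def diff_abs(a, sigma, tau):
    s = complex(sigma, tau)
    return abs(a ** -s - (a + 1) ** -s)

random.seed(1)
ok = True
for _ in range(1000):
    a = random.uniform(1.0, 1e6)
    sigma = random.uniform(-0.49, 3.0)
    tau = random.uniform(-50.0, 50.0)
    bound = (abs(tau) + abs(sigma)) * a ** (-1 - sigma)
    if diff_abs(a, sigma, tau) > bound + 1e-12:  # small slack for float rounding
        ok = False
```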


Coming back to $f_{1/2}(x)=\sum_{n=1}^\infty a_n^{1/2}x^{a_n}$, we use the Cahen–Mellin integral $$ e^{-t}=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)t^{-s}\,ds\qquad(t,\sigma>0) $$ with $\color{blue}{\sigma>1}$ to justify the interchange of $\sum$ and $\int$ below: \begin{align} f_{1/2}(e^{-t})&=\frac1{2\pi i}\sum_{n=1}^\infty a_n^{1/2}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)(a_n t)^{-s}\,ds \\&=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)A(s-1/2)t^{-s}\,ds \\&=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\frac{\Gamma(s)A_\pm(s-1/2)t^{-s}}{1-4^{1-s}}\,ds. \end{align}
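Spelled out, the middle step uses the termwise identity (valid for $\Re s>1$, where the series converges absolutely): $$\sum_{n=1}^\infty a_n^{1/2}(a_n t)^{-s}=t^{-s}\sum_{n=1}^\infty a_n^{-(s-1/2)}=A(s-1/2)\,t^{-s}.$$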

The crude estimate above shows that, as $|\Im s|\to\infty$, the exponential decay of $|\Gamma(s)|$ "outperforms" the (possible) growth of $|A_\pm(s-1/2)|$. This allows us to shift the line of integration to $\color{blue}{0<\sigma<1}$, using the residue theorem: \begin{align} f_{1/2}(e^{-t})&=\frac1{2t\log2}\sum_{m\in\mathbb{Z}}\Gamma(1+s_m)A_\pm(1/2+s_m)t^{-s_m} \\&+\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\frac{\Gamma(s)A_\pm(s-1/2)t^{-s}}{1-4^{1-s}}\,ds. \end{align} The integral is now $O(t^{-\sigma})$ as $t\to 0^+$, and we can take $\sigma$ arbitrarily close to $0$.
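For completeness: the poles crossed in the shift are the zeros of $1-4^{1-s}$, namely $s=1+s_m$, $m\in\mathbb{Z}$, and since $\frac{d}{ds}\big(1-4^{1-s}\big)=4^{1-s}\log 4=2\log2$ at these points, $$\operatorname*{Res}_{s=1+s_m}\frac{\Gamma(s)A_\pm(s-1/2)\,t^{-s}}{1-4^{1-s}}=\frac{\Gamma(1+s_m)\,A_\pm(1/2+s_m)}{2\log2}\,t^{-1-s_m},$$ which summed over $m$ gives the first term above.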

The sum is the (non-constant) Fourier expansion announced at the beginning: since $t^{-s_m}=e^{-\pi i m\log t/\log 2}$, it is periodic in $\log t$ with period $\log 4$.


ADDENDUM on computation of $A_m:=A_\pm\left(\frac12+\frac{m\pi i}{\log2}\right)$ (and $A_\pm(s)$ in general). Let $$ A_\pm(s)=S_N(s)+R_N(s),\qquad R_N(s):=\sum_{n=N+1}^\infty\big((4a_n)^{-s}-(4a_n+1)^{-s}\big). $$ The estimate \eqref{estimate} above, and the inequality $4a_n\geqslant n^2$, imply $$ \big|R_N(\sigma+i\tau)\big|\leqslant(|\tau|+|\sigma|)\sum_{n=N+1}^\infty n^{-2-2\sigma}\leqslant\frac{|\tau|+|\sigma|}{1+2\sigma}N^{-1-2\sigma}. $$ This estimate at $N=2$ is (already) sufficient to prove that $\color{blue}{A_1\neq 0}$.
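This check is easy to reproduce; a Python sketch (mine) evaluating $S_2$ at $s=\frac12+\frac{\pi i}{\log2}$ and comparing with the tail bound at $N=2$:

```python
import math

# Evaluate S_2 at s = 1/2 + pi*i/log 2 and compare with the tail bound
# |R_2| <= (|tau| + |sigma|)/(1 + 2 sigma) * 2^{-1-2 sigma}:
# |S_2| > |R_2| forces A_1 = S_2 + R_2 to be nonzero.
def a(n):
    v, p = 0, 1
    while n:
        v += (n & 1) * p
        n >>= 1
        p *= 4
    return v

s = complex(0.5, math.pi / math.log(2))
S2 = 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s for n in (1, 2))
bound = (abs(s.imag) + abs(s.real)) / (1 + 2 * s.real) * 2.0 ** (-1 - 2 * s.real)
```

Numerically $|S_2|\approx0.85$ while the bound is $\approx0.63$, so $A_1\neq0$.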


To accelerate the convergence for more efficient computations, consider $$ \Sigma_k(s):=S_{2^k-1}(s),\qquad\Delta_k(s):=R_{2^k-1}(s). $$ It appears that $\Delta_k(s)$ has an asymptotic expansion of the form $$ \Delta_k(s)\asymp\sum_{j=0}^{(\infty)} c_j(s)2^{-(2j+2s+1)k}.\qquad(k\to\infty) $$ This eventually leads to the following "recipe".

Let $\sum_{j=0}^r p_{r,j} x^j:=\prod_{j=0}^{r-1}(4^j x-1)$ for a (small) positive integer $r$, and define $$ \Sigma_k^{(r)}(s)=\frac{\sum_{j=0}^r 2^{(2s+1)j}p_{r,j}\Sigma_{k+j}(s)}{\sum_{j=0}^r 2^{(2s+1)j}p_{r,j}}. $$ Then $\Sigma_k^{(r)}(s)$ is computed in roughly the same time as $\Sigma_{k+r}(s)$ (if the values are computed in sequence with increasing $k$) but converges to $A_\pm(s)$ much faster. This way, I've got the following values.
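A Python transcription of the recipe (for $m=0$, i.e. $s=1/2$, with $r=2$; the tolerances are mine and the reference digits are the quoted value of $A_0$):

```python
# Sigma_k(s) = S_{2^k - 1}(s), extrapolated with weights 2^{(2s+1)j} p_{2,j}.
def a(n):
    v, p = 0, 1
    while n:
        v += (n & 1) * p
        n >>= 1
        p *= 4
    return v

def Sigma(k, s=0.5):
    return 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s
                   for n in range(1, 2 ** k))

p = [1, -5, 4]                         # (x - 1)(4x - 1) = 4x^2 - 5x + 1
w = [4 ** j * p[j] for j in range(3)]  # 2^{(2s+1)j} p_{2,j} = 4^j p_{2,j} at s = 1/2
acc = sum(w[j] * Sigma(6 + j) for j in range(3)) / sum(w)  # Sigma_6^{(2)}(1/2)
ref = Sigma(14)  # direct sum; |R_{2^14 - 1}| <= 0.25 (2^14 - 1)^{-2} < 1e-9
```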

| $m$ | $\approx\Re A_m$ | $\approx\Im A_m$ |
|:---:|:---|:---|
| $0$ | $\phantom{-}0.9301188714777068256428634$ | $\phantom{-}0$ |
| $1$ | $\phantom{-}0.6530947000949628569271101$ | $-0.4875373269525809327441595$ |
| $2$ | $\phantom{-}0.1502485249871372257970461$ | $-0.4820952084817942842557168$ |
| $3$ | $-0.0092994993442554458938163$ | $-0.0963080518133936172598479$ |
| $4$ | $\phantom{-}0.2783414309554035350627076$ | $\phantom{-}0.1200023391019127895989171$ |
| $5$ | $\phantom{-}0.5592044887794212271016653$ | $-0.1168455562119756445211653$ |
metamorphy
  • 43,591
  • Hello @metamorphy. Thank you for your reply. Your answer gave me a lot of inspiration, but I still can't follow it fully. Could you please clarify the details? If there are any tutorials or references that would help, please advise. – user130405 Aug 04 '24 at 09:25
  • For example, you said the oscillating part of $(1-x)f(x)$ is a function of $\log(-\log x)$. But $(1-x)f(x)$ is bounded as $x\to 1^{-}$. The simple calculation is as follows. – user130405 Aug 04 '24 at 09:26
  • By using your previous conclusion https://math.stackexchange.com/a/3276562, we get the limit $$ \lim_{x\to1^{-}}(1-x)\sum_{n=1}^{\infty}4^{n} x^{4^n}=1/\ln 4.\;\;\;(1) $$ Let $\Omega_n=\{c_0+c_14+c_24^2+\cdots+c_n4^n:c_i\in\{0,1\},\ 0\leq i\leq n\}$ for $n\geq 0$. Then $$f_s(x)=\sum_{n=0}^{\infty}a_n^{s} x^{a_n}=\sum_{a\in\Omega_0} a^{s} x^{a}+\sum_{n=1}^{\infty}\sum_{a\in\Omega_n\setminus\Omega_{n-1}} a^{s} x^{a},\; 0<x<1.$$ Note that $2^{n}(4^{n})^{s}x^{2\times4^n}\leq\sum_{a\in\Omega_n\setminus\Omega_{n-1}} a^{s} x^{a}\leq 2^{n}(2\times4^{n})^{s}x^{4^n}$ and $s=1/2$. – user130405 Aug 04 '24 at 09:29
  • Hence $$h(x):=1+\sum_{n=1}^{\infty}4^n (x^2)^{4^n}\leq f_{\frac12}(x)\leq1+\sqrt{2}\sum_{n=1}^{\infty}4^n x^{4^n}=:g(x),\quad 0<x<1.$$ By (1), we have $h(x)\sim\frac1{\ln 4}\cdot\frac1{1-x^2}\sim \frac{1}{2\ln 4\,(1-x)}$ and $g(x)\sim\frac{\sqrt{2}}{\ln 4\,(1-x)}$ as $x\to 1^{-}$. This implies $$\frac1{2\ln 4}\leq\liminf_{x\to1^-}(1-x)f_{\frac12}(x)\leq\limsup_{x\to1^-}(1-x)f_{\frac12}(x)\leq \frac{\sqrt{2}}{\ln 4}.$$ – user130405 Aug 04 '24 at 09:30
  • Dear @metamorphy, thank you. I understand this question now. But I have another question about your proof: why is the sum $$ \sum_{m\in\mathbb{Z}}\Gamma(1+s_m)A_\pm(1/2+s_m)t^{-s_m} $$ the (non-constant) Fourier expansion? As you mentioned above, this is equivalent to $A_\pm(1/2+s_m)\neq 0$ for some $m\neq 0$. You said it can be verified numerically for a few values of $m$; how can this be proved, theoretically or numerically? – user130405 Aug 24 '24 at 17:49
  • @user130405: See the updated answer. – metamorphy Aug 29 '24 at 03:47
0

A real-analysis approach along these lines, with the lower and upper limits computed numerically.

Let's extend $a_n$ to non-integers first. Define $\lambda:\mathbb{R}_{\geqslant0}\to\mathbb{R}_{\geqslant0}$ by $$ \lambda\left(\sum_{n\in\Omega}2^n\right)=\sum_{n\in\Omega}4^n $$ for any $\Omega\subset\mathbb{Z}$ with $\sup\Omega<\infty$ and $\inf(\mathbb{Z}\setminus\Omega)=-\infty$ (that is, $\lambda$ reads the binary expansion of its argument, chosen not to end in all $1$'s, and reinterprets it in base $4$).
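On dyadic rationals the definition is explicit: $\lambda(m/2^k)=a_m/4^k$, where $a_m$ reinterprets the binary digits of $m$ in base $4$. A small Python sketch (helper names are mine) checking that this extension agrees with $a_n$ on integers, is strictly increasing, and satisfies $\lambda(2y)=4\lambda(y)$:

```python
from fractions import Fraction

def a(n):
    v, p = 0, 1
    while n:
        v += (n & 1) * p
        n >>= 1
        p *= 4
    return v

def lam(m, k):
    """lambda(m / 2^k): shift the binary point, reinterpret in base 4, shift back."""
    return Fraction(a(m), 4 ** k)

grid = [lam(m, 5) for m in range(256)]  # lambda on the grid {0, 1/32, 2/32, ...}
```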

Then, following the answer linked above, we find: $$ \color{blue}{\lim_{m\to\infty}4^{t-m}f_{1/2}(e^{-4^{t-m}})=4^t F(4^t),} $$ where $F(x)=\int_0^\infty\lambda(y)^{1/2}e^{-x\lambda(y)}\,dy$ for $x>0$.
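A rough numerical illustration (Python sketch, mine): at $t=0$ and already for $m=8$, the scaled quantity $4^{-m}f_{1/2}(e^{-4^{-m}})$ should be close to the limiting range computed at the end of this answer; the truncation level is chosen so that the tail is negligible, since $4a_n\geqslant n^2$:

```python
import math

# a_n via a_{2n} = 4 a_n, a_{2n+1} = 4 a_n + 1
def a_seq(N):
    seq = [0]
    for n in range(1, N):
        seq.append(4 * seq[n >> 1] + (n & 1))
    return seq

t = 4.0 ** -8              # t = 4^{t0 - m} with t0 = 0, m = 8
A = a_seq(20_000)          # 4 a_n >= n^2, so exp(-t a_n) vanishes beyond this range
v = t * sum(math.sqrt(an) * math.exp(-t * an) for an in A[1:])
```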

Again, $t\mapsto 4^t F(4^t)$ is a $1$-periodic function. To compute its minimum/maximum, we need a quick enough computation of $F$ (and, perhaps, its derivative). The idea I have is to use $$ F(x)=-\frac1{\sqrt\pi}\int_x^\infty\frac{G'(y)}{\sqrt{y-x}}\,dy, $$ where $G(x)=\int_0^\infty e^{-x\lambda(y)}\,dy$ has a nice infinite-product expansion: $$ G(x)=\prod_{n=0}^\infty(1+e^{-4^n x})\prod_{n=1}^\infty\frac{1+e^{-4^{-n}x}}2. $$
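The first product has an exact finite counterpart that makes the expansion easy to believe: the $a_n$ with $n<2^K$ enumerate all subset sums of $\{1,4,\dots,4^{K-1}\}$, so the truncated sum factors exactly (Python sketch, mine):

```python
import math

# Finite version of the first product: sum over n < 2^K of e^{-x a_n}
# equals prod_{k<K} (1 + e^{-x 4^k}) exactly.
def a(n):
    v, p = 0, 1
    while n:
        v += (n & 1) * p
        n >>= 1
        p *= 4
    return v

K, x = 10, 0.5
series = sum(math.exp(-x * a(n)) for n in range(2 ** K))
product = math.prod(1 + math.exp(-x * 4 ** k) for k in range(K))
```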

After some simplifications, I wrote the following PARI/GP script:

\\ foo(x): 2^x * G(4^x), via the two infinite products for G above
foo(x) = 2^x * prodinf(n=0, exp(-4^(x+n)), 1) * prodinf(n=1, expm1(-4^(x-n))/2, 1);
\\ goo1/goo2: summands of the first/second logarithmic-derivative sums of foo
goo1(n,x) = { my(y=4^(x+n), z=exp(-y)); return(z*y/(1+z)) };
goo2(n,x) = { my(y=4^(x+n), z=exp(-y)); return(z*(y/(1+z))^2) };
\\ goo0: algebraic kernel coming from the 1/sqrt(y-x) factor
goo0(n,x) = { my(y=4^(-x-n)); return(y/sqrt(1-y)) };
foo1(x) = suminf(n=0, goo1(n,x)) + suminf(n=1, goo1(-n,x));
foo2(x) = suminf(n=0, goo2(n,x)) + suminf(n=1, goo2(-n,x));
foo0(x) = suminf(n=0, goo0(n,x));
\\ ker1/ker2: integrands built from -G' and its derivative
ker1(x) = foo(x)*foo1(x);
ker2(x) = { my(y=foo1(x)); return(foo(x)*(y*(3-2*y)-2*foo2(x))) };
\\ objective (proportional to 4^t F(4^t)) and its derivative;
\\ intnum(t=[0,-1/2], ...) handles the t^(-1/2) endpoint singularity
objfun(x) = intnum(t=[0,-1/2], 1, foo0(t)*ker1(x+t));
objder(x) = intnum(t=[0,-1/2], 1, foo0(t)*ker2(x+t));
\\ extrema of the 1-periodic function, located by solving objder = 0
objmin = 2*log(2)/sqrt(Pi) * objfun(solve(x=0.8,0.9,objder(x)));
objmax = 2*log(2)/sqrt(Pi) * objfun(solve(x=0.3,0.4,objder(x)));

The script computes the lower limit as objmin and the upper limit as objmax: \begin{align} \liminf_{x\to 1^-}\ (1-x)f_{1/2}(x) &= 0.66586394843939004821628799734260998\cdots \\ \limsup_{x\to 1^-}\ (1-x)f_{1/2}(x) &= 0.67601923344314595333773524756892407\cdots \end{align} (compare with the "mean value" $A_0/(2\log2)\approx0.670938941659\cdots$).

metamorphy
  • 43,591