
I came across the following integral while reading material on the stable distribution: $$ \frac{1}{\pi}\int_{0}^{\pi}\left\{ \frac{\sin^{\alpha}(\alpha u)\,\sin^{1-\alpha}\!\left((1-\alpha)u\right)}{\sin(u)} \right\}^{\rho/\alpha}\,\mathrm{d}u, $$ where $\alpha \in (0,1)$. It appears that the result should be $$ \frac{\Gamma\left(1 - \rho/\alpha\right)}{\Gamma(1 - \rho/\alpha + \rho)\,\Gamma(1-\rho)}, $$ where $-1 < \Re\left(\rho\right) < \alpha$.
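For what it's worth, a quick numerical sanity check (a rough Python/scipy sketch; the values $\alpha = 0.7$ and $\rho = 0.3$ are just arbitrary choices within the stated range) agrees with this closed form:

```python
# Numerical sanity check of the conjectured identity (not a proof).
# alpha = 0.7 and rho = 0.3 are arbitrary values with 0 < alpha < 1, -1 < rho < alpha.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, rho = 0.7, 0.3

def integrand(u):
    kernel = np.sin(alpha * u)**alpha * np.sin((1 - alpha) * u)**(1 - alpha) / np.sin(u)
    return kernel**(rho / alpha)

# integrable singularity at u = pi when rho > 0, so raise the subdivision limit
lhs, _ = quad(integrand, 0, np.pi, limit=200)
lhs /= np.pi

rhs = gamma(1 - rho / alpha) / (gamma(1 - rho / alpha + rho) * gamma(1 - rho))
print(lhs, rhs)  # the two printed values should agree to several digits
```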

However, I have no idea how to prove it. I am wondering whether someone could help me with this.

Thanks in advance

Edit (13/11/2020):

I would like to thank Pisco for his kind and patient help; I can now understand his proof perfectly. There is a related integral, $$\int_0^{\infty}\frac{1}{\pi}\int_0^{\pi}\exp\left\{-q\,x^{-\frac{1-\alpha}{\alpha}}\left(\frac{\sin^{\alpha}(\alpha u)\,\sin^{1-\alpha}((1-\alpha)u)}{\sin u}\right)^{\frac{1}{\alpha}}\right\}\mathrm{d}u\; e^{-x}\,\mathrm{d}x = e^{-q^{\alpha}},\quad \alpha\in(0,1).\tag{*}$$
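As a sanity check only, $(*)$ can also be tested numerically; here is a rough Python/scipy sketch, with $\alpha = 0.6$ and $q = 1.3$ chosen arbitrarily:

```python
# Numerical sanity check of (*) (not a proof); alpha and q are arbitrary test values.
import numpy as np
from scipy.integrate import quad

alpha, q = 0.6, 1.3

def A(u):
    # the bracketed kernel of (*), already raised to the power 1/alpha
    return (np.sin(alpha * u)**alpha * np.sin((1 - alpha) * u)**(1 - alpha) / np.sin(u))**(1 / alpha)

def inner(x):
    # (1/pi) * integral over u in (0, pi) of exp{-q x^{-(1-alpha)/alpha} A(u)}
    val, _ = quad(lambda u: np.exp(-q * x**(-(1 - alpha) / alpha) * A(u)), 0, np.pi)
    return val / np.pi

lhs, _ = quad(lambda x: inner(x) * np.exp(-x), 0, np.inf)
print(lhs, np.exp(-q**alpha))  # both values should be close to exp(-q^alpha)
```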

Further, for the unilateral (one-sided) stable distribution $S_\alpha$, $\alpha \in (0,1)$, the density function is $$f_{S_\alpha}(x)=\frac{1}{x}\sum_{k=1}^\infty\frac{(-x^\alpha)^{-k}}{k!\,\Gamma(-k\alpha)}, \qquad x>0,$$ and we know that the Laplace transform of this density is $e^{-q^\alpha}$; see, for example, the paper "A new family of tempered distributions" (2016). If we take the Laplace transform of $f_{S_\alpha}$ term by term, setting aside for the moment the justification for exchanging the integral and the sum, we get $$\mathcal{L}_{S_\alpha}(q)=\sum_{k=1}^{\infty}\frac{(-q^\alpha)^k}{k!\,\Gamma(-k \alpha)}\Gamma(-k\alpha)=e^{-q^\alpha}-1,$$ which is not what we expect. Therefore I am also wondering how to recover $f_{S_\alpha}$ from $(*)$.

Looking for your help.

  • Was the result given? If not, how did you get it? – Claude Leibovici Jul 21 '20 at 07:24
  • This is mainly obtained through the stable distribution; however, I have not fully figured out the calculations, as mentioned here. You may want to take a look at the paper "On simulation and properties of the stable law" (2014). – gouwangzhangdong Jul 21 '20 at 13:36

1 Answer


The claim seems highly nontrivial; this integral definitely deserves more attention.

It is equivalent to prove

$$\int_0^\pi \left[\frac{\sin^a(ax)\,\sin^{1-a}((1-a)x)}{\sin x}\right]^b dx = \pi\,\frac{\Gamma(1-b)}{\Gamma(1-b+ab)\,\Gamma(1-ab)},\qquad 0<a,b<1.$$
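For reference, with $a = \alpha$ and $b = \rho/\alpha$ (so $ab = \rho$) this is exactly the integral in the question, since $$\pi\,\frac{\Gamma(1-b)}{\Gamma(1-b+ab)\,\Gamma(1-ab)} = \pi\,\frac{\Gamma(1-\rho/\alpha)}{\Gamma(1-\rho/\alpha+\rho)\,\Gamma(1-\rho)},$$ and the factor $\pi$ is cancelled by the $\frac{1}{\pi}$ in front of the question's integral.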


Let $\alpha>0$, $x>0$, and $-\pi/2<\varphi<\pi/2$. Consider $$I=\Re \int_0^\infty \exp \left( {itx - {t^\alpha } {e^{i\varphi }}} \right)dt.$$ Writing $t$ in polar coordinates, $t = re^{i\theta}$, we have $$\Im(itx - t^\alpha e^{i\varphi }) = rx\cos\theta - r^\alpha \sin(\alpha\theta+\varphi),$$ so (a portion of) $\{t\in \mathbb{C} \mid \Im(itx - t^\alpha e^{i\varphi})=0 \}$ can be parametrized by $$\tag{*}r(\theta) = {\left( {\frac{{ \sin (\alpha \theta + \varphi )}}{{x\cos \theta }}} \right)^{1/(1 - \alpha )}}.$$ We shall choose $\alpha, \varphi$ so that the term inside the parentheses is positive for $-\varphi/\alpha < \theta < \pi/2$. We concentrate on the case $0<\alpha<1$; in this case, $r(\theta)$ travels from $0$ to $\infty$ as $\theta$ increases from $-\varphi/\alpha$ to $\pi/2$. Let $$\Gamma = \{ re^{i\theta} \mid -\varphi/\alpha < \theta < \pi/2,\ r =r(\theta)\}.$$

The integrand decays fast enough at infinity to justify deforming the path of integration, giving $$I=\Re \int_\Gamma \exp \left( {itx - {t^\alpha }{e^{i\varphi }}} \right)dt = \int_\Gamma \exp( \Re({itx - {t^\alpha } {e^{i\varphi }}})) \,d(\Re t),$$ where the second equality holds because $itx - t^\alpha e^{i\varphi}$ is real on $\Gamma$. Let $$\mathcal{K}(\alpha,\varphi,\theta) = \frac{{{{( {\sin (\alpha \theta + \varphi )} )}^{\alpha /(1 - \alpha )}}\cos ((\alpha - 1)\theta + \varphi )}}{{{{(\cos \theta )}^{1/(1 - \alpha )}}}}.$$ Using the parametrization $(*)$, one calculates (details omitted) $$\begin{aligned}&\Re({itx - {t^\alpha } {e^{i\varphi }}}) = -rx \sin\theta - r^\alpha \cos(\alpha\theta+\varphi) = - {x^{\alpha /(\alpha - 1)}}\mathcal{K}(\alpha,\varphi,\theta), \\ &d(\Re t) = d(r\cos\theta) = \frac{\alpha }{{1 - \alpha }}{x^{ - 1/(1 - \alpha )}}\mathcal{K}(\alpha,\varphi,\theta)\, d\theta. \end{aligned}$$ Therefore $$\tag{**}\Re \int_0^\infty {\exp \left[ {itx - {t^\alpha }{e^{i\varphi }}} \right]dt} = \frac{\alpha }{{1 - \alpha }}{x^{1/(\alpha - 1)}}\int_{ - \varphi /\alpha }^{\pi /2} {\exp \left[ { - {x^{\alpha /(\alpha - 1)}}\mathcal{K}(\alpha,\varphi,\theta)} \right]\mathcal{K}(\alpha,\varphi,\theta)d\theta } $$
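To spell out the omitted computation: substituting $(*)$ and factoring, $$-rx\sin\theta - r^\alpha\cos(\alpha\theta+\varphi) = -x^{-\frac{\alpha}{1-\alpha}}\,\frac{\sin^{\frac{\alpha}{1-\alpha}}(\alpha\theta+\varphi)}{\cos^{\frac{\alpha}{1-\alpha}}\theta}\cdot\frac{\sin(\alpha\theta+\varphi)\sin\theta+\cos(\alpha\theta+\varphi)\cos\theta}{\cos\theta},$$ and the angle-subtraction formula turns the last fraction into $\cos((\alpha-1)\theta+\varphi)/\cos\theta$, so the whole expression equals $-x^{\alpha/(\alpha-1)}\,\mathcal{K}(\alpha,\varphi,\theta)$. The same identity appears when differentiating $\Re t = r(\theta)\cos\theta$, which gives the stated formula for $d(\Re t)$.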

The choice of the contour $\Gamma$ is adapted from Zolotarev's *One-dimensional Stable Distributions*, pp. 74-77. Identity $(**)$ is the crucial ingredient for deriving the desired integral.


$(**)$ is valid for $x>0$. Replace $x$ by $x^c$ (with $c>1$) and integrate both sides with respect to $x$ over $(0,\infty)$ (easily justified): $$\Gamma\!\left(\frac{1}{c}\right)\Re \int_0^\infty {{{( - it)}^{ - 1/c}}\exp ( - {t^\alpha }{e^{i\varphi }})dt} = \Gamma \left( {\frac{{ - 1 + \alpha + c}}{{\alpha c}}} \right)\int_{ - \varphi /\alpha }^{\pi /2} {{{\mathcal{K}(\alpha,\varphi,\theta)}^{1 - \frac{ - 1 + \alpha + c}{\alpha c}}}d\theta }. $$ Let $\beta = (1-c)/(\alpha c)$ (so $\beta < 0$); then the left-hand side of the above equation equals $$\tag{1}\Gamma (1 + \alpha \beta )\frac{{\Gamma ( - \beta )}}{\alpha }\cos \left(\frac{\pi }{2}(1 + \alpha \beta ) + \varphi \beta \right)$$ (this reduction is sketched after the list of hypotheses below), while the right-hand side is $$\tag{2} \Gamma \left( {1 - \beta + \alpha \beta} \right) \int_{ - \varphi /\alpha }^{\pi /2} \frac{\sin^{\alpha\beta}(\alpha \theta + \varphi )\,\cos^{(1 - \alpha )\beta}((\alpha - 1)\theta + \varphi )}{\cos^\beta \theta }\,d\theta. $$ We have thus proved the equality $(1) = (2)$ under the following hypotheses:

  • $0<\alpha<1, \beta<0, \alpha\beta>-1$
  • $-\pi/2<\varphi<\pi/2$ such that every power appearing in $(2)$ has a positive base throughout the range of integration.
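To obtain $(1)$ (the reduction referenced above): for $t>0$ write $(-it)^{-1/c} = e^{i\pi/(2c)}\,t^{-1/c}$, and use $\int_0^\infty t^{s-1}e^{-\lambda t^{\alpha}}\,dt = \frac{1}{\alpha}\Gamma\!\left(\frac{s}{\alpha}\right)\lambda^{-s/\alpha}$ for $\Re\lambda>0$, here with $\lambda = e^{i\varphi}$ and $s = 1-\frac{1}{c}$. Since $\frac{1-1/c}{\alpha} = -\beta$ and $\frac{1}{c} = 1+\alpha\beta$, this gives $$\Gamma\!\left(\frac{1}{c}\right)\Re\int_0^\infty (-it)^{-1/c}\exp(-t^\alpha e^{i\varphi})\,dt = \Gamma(1+\alpha\beta)\,\frac{\Gamma(-\beta)}{\alpha}\,\Re\left[e^{i\left(\frac{\pi}{2}(1+\alpha\beta)+\varphi\beta\right)}\right],$$ which is exactly $(1)$.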

In particular, the hypotheses hold for $\varphi = \pi \alpha/2$, giving $$\begin{aligned}&\quad \int_{ - \pi /2}^{\pi /2} \frac{\sin^{\alpha \beta }(\alpha \theta + \frac{\pi }{2}\alpha )\,\cos^{(1 - \alpha )\beta }((\alpha - 1)\theta + \frac{\pi }{2}\alpha )}{\cos^\beta \theta }\,d\theta = \int_0^\pi \frac{\sin^{\alpha \beta }(\alpha \theta )\,\sin^{(1 - \alpha )\beta }((1 - \alpha )\theta )}{\sin^\beta \theta }\,d\theta \\ & = \frac{{\Gamma (1 + \alpha \beta )}}{{\Gamma (1 - \beta + \alpha \beta )}}\frac{{\Gamma ( - \beta )}}{\alpha }\cos \left(\frac{\pi }{2} + \pi \alpha \beta \right),\end{aligned}$$ completing the proof.
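To spell out the last two steps: the first equality above is the change of variables $u = \theta + \pi/2$, under which $\sin(\alpha\theta + \frac{\pi}{2}\alpha) = \sin(\alpha u)$, $\cos((\alpha-1)\theta + \frac{\pi}{2}\alpha) = \cos((\alpha-1)u + \frac{\pi}{2}) = \sin((1-\alpha)u)$ and $\cos\theta = \sin u$. Moreover, using $\cos(\frac{\pi}{2}+\pi\alpha\beta) = -\sin(\pi\alpha\beta)$ and the reflection formula $\Gamma(1+\alpha\beta)\sin(\pi\alpha\beta) = -\pi/\Gamma(-\alpha\beta)$, the final expression can be rewritten as $$\frac{\pi\,\Gamma(-\beta)}{\alpha\,\Gamma(-\alpha\beta)\,\Gamma(1-\beta+\alpha\beta)} = \pi\,\frac{\Gamma(1-\beta)}{\Gamma(1-\beta+\alpha\beta)\,\Gamma(1-\alpha\beta)},$$ which matches the conjectured closed form with $b = \beta$.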

pisco
  • Thanks for this interesting proof. Could you please explain in more detail how you obtained your range of $\theta$? We require the $\sin$ and $\cos$ in Eq. $(*)$ to be positive at the same time, and we still need to consider $\varphi$ varying between $-\frac{1}{2}\pi$ and $\frac{1}{2}\pi$. Also, since the right-hand side of my question is a reciprocal beta function, can we solve this within the real domain? Thanks in advance. – gouwangzhangdong Oct 15 '20 at 00:03
  • @gouwangzhangdong For fixed $1<\alpha<2$, we can *choose* $\varphi \in (-\pi/2,\pi/2)$ to make the term inside the parentheses in $(*)$ positive for every $\theta \in (-\varphi/\alpha, \pi/2)$; I do *not* require it to be positive for *every* $\varphi \in (-\pi/2,\pi/2)$. Regarding your second question, the solution here is the only valid one I know. I tried various techniques, elementary and advanced, and none of them succeeded except this one. – pisco Oct 16 '20 at 12:58
  • Thanks, you are very kind. I am still working through your proof. First, what inspired you to set $\theta \in (-\frac{\varphi}{\alpha}, \frac{\pi}{2})$? Is it from the book you mentioned? Second, once you have the range of $\theta$, you claim "this forces $\alpha \in (0,2)$". How can I see this? Sorry if I am being stupid. By the way, how can I use the @ function as you did? – gouwangzhangdong Oct 18 '20 at 01:20
  • @gouwangzhangdong $\theta \in (-\frac{\varphi}{\alpha}, \frac{\pi}{2})$ is chosen so that the resulting contour goes from $0$ to $\infty$. I think you can ignore "this forces $\alpha\in (0,2)$" and just remember $1<\alpha<2$. Finally, you can do that by typing @ at the start of a comment. – pisco Oct 18 '20 at 03:40
  • Hey Pisco, thanks for your help once again. I am still working to understand your proof. I am fine with all the calculations but get stuck on the ranges. You start with $\alpha \in (1,2)$; how does that influence your calculations? Also, how can you extend $\beta<0$ and $\alpha \in (1,2)$ to $\Re (\beta)<1$ and $\alpha \in (0,1)$ later? For $\beta \in (0,1)$, $\Gamma(-\beta)$ is complex infinity, and so is $\Gamma(1-\beta+\alpha \beta)$. Finally, why is $-\frac{\varphi}{\alpha}$ replaced by $-\frac{\pi}{2}$? These should be my last questions on your proof. I cannot thank you enough. – gouwangzhangdong Oct 27 '20 at 10:51
  • @gouwangzhangdong After some thought, I found it is actually more convenient to assume $0<\alpha<1$; then you don't need to appeal to analytic continuation. My approach of considering $1<\alpha<2$ is also sound, albeit circuitous. By the way, for $\beta\in (0,1)$, $\Gamma(-\beta)$ is not complex infinity. – pisco Oct 27 '20 at 13:15
  • Yes, now I can understand your proof perfectly. Sorry for my late reply; I broke my leg, so it took me a while to respond. I came across another related integral and have edited it into the question. Looking forward to your advice. – gouwangzhangdong Nov 13 '20 at 01:42
  • @gouwangzhangdong I am sorry to hear that, hope you recover soon. The added identity should follow from Fourier inversion applied to $(**)$. – pisco Nov 14 '20 at 09:49
  • Sorry, I made a terrible mistake. I have rewritten my question now. – gouwangzhangdong Nov 24 '20 at 12:54