
If r.v. $X\sim \operatorname{Gamma}(a+b,\lambda)$ and $Y\sim \operatorname{Beta}(a,b)$ are independent, their product $XY\sim \operatorname{Gamma}(a,\lambda)$

I am not asking how to prove this, but how to interpret this result. Maybe a physical situation that implies the result?


For clarity and "proof of work", I will give two ways of proving this.

Proof 1 - Change of variables

Let $U=XY$ and $V=X$; the inverse transformation is $X=V$, $Y=U/V$, so the Jacobian is $|J|=1/V$.

$Y=U/V\leq1 \Rightarrow V\geq U$.

$f_U(u)=\int_u^{+\infty}f_{U,V}(u,v)\,dv=\int_u^{+\infty}f_{X,Y}(v,u/v)\,|J|\,dv\propto\int_u^{+\infty}e^{-\lambda v}u^{a-1}(v-u)^{b-1}\,dv$

Let $t=v-u$; then $f_U(u)\propto u^{a-1}e^{-\lambda u}\int_0^{+\infty}t^{b-1}e^{-\lambda t}\,dt\propto u^{a-1}e^{-\lambda u}$.

Therefore, $U\sim \operatorname{Gamma}(a,\lambda)$.
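As a quick sanity check of the claim (not part of the proof), one can compare a large Monte Carlo sample of $XY$ against the $\operatorname{Gamma}(a,\lambda)$ distribution; the parameter values below are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b, lam = 2.0, 3.0, 1.5
n = 200_000

# X ~ Gamma(a+b, lam) with rate lam (numpy uses scale = 1/rate), Y ~ Beta(a, b), independent
x = rng.gamma(shape=a + b, scale=1.0 / lam, size=n)
y = rng.beta(a, b, size=n)
u = x * y

# Kolmogorov-Smirnov distance between the product's empirical CDF and Gamma(a, lam)
ks = stats.kstest(u, stats.gamma(a, scale=1.0 / lam).cdf)
print(ks.statistic)  # small value => distributions agree
```

The KS statistic comes out at the level of sampling noise ($\approx 1/\sqrt{n}$), consistent with $XY\sim\operatorname{Gamma}(a,\lambda)$.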

Proof 2

A well-known relation between Gamma distribution and Beta distribution is as follows

If r.v. $X\sim \operatorname{Gamma}(a,\lambda)$ and $Y\sim \operatorname{Gamma}(b,\lambda)$ are independent, then $U:=X+Y\sim \operatorname{Gamma}(a+b,\lambda)$ and $V:=\frac{X}{X+Y}\sim \operatorname{Beta}(a,b)$, and they are independent.

The proof for this again uses a change of variables and can be found in the other question.

Using this fact, we immediately get that $UV=X\sim \operatorname{Gamma}(a,\lambda)$.
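The decomposition behind Proof 2 can itself be checked numerically: simulate independent Gammas, form $U=X+Y$ and $V=X/(X+Y)$, and verify their marginals and (a weak proxy for) their independence. The parameter values are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b, lam = 2.0, 3.0, 1.5
n = 200_000

# Independent X ~ Gamma(a, lam), Y ~ Gamma(b, lam); numpy's scale = 1/rate
x = rng.gamma(a, 1.0 / lam, size=n)
y = rng.gamma(b, 1.0 / lam, size=n)
u = x + y        # claimed ~ Gamma(a+b, lam)
v = x / (x + y)  # claimed ~ Beta(a, b)

ks_u = stats.kstest(u, stats.gamma(a + b, scale=1.0 / lam).cdf)
ks_v = stats.kstest(v, stats.beta(a, b).cdf)
corr = np.corrcoef(u, v)[0, 1]  # near 0: consistent with independence
```

Both KS statistics and the sample correlation sit at sampling-noise level, as the decomposition predicts.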

Robin
John Ao
  • In your title you have $X \perp Y$ which is not found in your text. Does this notation $ \perp $ means independence ? – Jean Marie Apr 22 '25 at 20:05
  • @JeanMarie Yes, the notation ⊥ means independence. I have edited the question and removed this notation for clarity. – John Ao Apr 23 '25 at 02:39

2 Answers


I think the intuition is a little easier to understand if you write this in a slightly different form, recalling the well-known result that if $X \sim $ Gamma$(\alpha, \lambda)$ and $Y \sim$ Gamma$(\beta, \lambda)$ are independent, then $X+Y \sim$ Gamma$(\alpha+\beta, \lambda)$. The reformulation of your result is as follows (see Wikipedia):

If $X \sim $ Gamma$(\alpha, \lambda)$ and $Y \sim$ Gamma$(\beta, \lambda)$ are independent, then $\frac{X}{X + Y} \sim$ Beta$(\alpha, \beta)$.

The intuition for this latter result is that the beta distribution suggests itself naturally because $\frac{X}{X + Y}$ lies between 0 and 1. The shape parameters are also plausible because the mean of the Beta distribution is $\frac{\alpha}{\alpha+\beta}$. What I am saying is that if a mathematician who does not know this specific result were asked to guess the distribution of $\frac{X}{X + Y}$, a natural conjecture would be Beta$(\alpha, \beta)$. Having arrived at this conjecture, that mathematician would then proceed to prove it using the change of variables that you have described.

The intuition could be strengthened by considering special cases, especially the very simple case $\alpha=\beta=1$, where the Gamma becomes the exponential and the Beta becomes the uniform.
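For that special case, the product result says that a Gamma$(2,\lambda)$ variable times an independent Uniform$(0,1)$ is Exponential$(\lambda)$; a small simulation (illustrative only, with an arbitrary $\lambda$) bears this out:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, n = 1.0, 200_000

# Gamma(2, lam) times an independent Uniform(0, 1); claimed ~ Exponential(lam)
g = rng.gamma(2.0, 1.0 / lam, size=n)  # numpy's scale = 1/rate
u = rng.uniform(size=n)
ks = stats.kstest(g * u, stats.expon(scale=1.0 / lam).cdf)
```

The KS statistic is at sampling-noise level, matching the $\alpha=\beta=1$ case of the result.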

  • This method is mentioned in my proof 2. I think this is kind of a "backward" way and I wonder if there is some "forward" way like using "Beta(a,b)" to distribute "Gamma(a+b,λ)" into "Gamma(a,λ)"? – John Ao Apr 23 '25 at 14:49

One way to see the Gamma distribution $\operatorname{Gamma}(a,\lambda)$ is as the waiting time for a Poisson process with rate $\lambda$ to produce $a$ arrivals. The Beta distribution $p \sim \operatorname{Beta}(a,b)$, in turn, can be interpreted as how likely a value $p$ of the success probability is for a binomial process in which we have seen $a$ successes out of $a+b$ attempts.

So, in this picture, taking the time needed for $a+b$ occurrences and keeping occurrences at random according to a probability $p$ that comes exactly from the Beta distribution above is equivalent to taking the time needed for just $a$ occurrences.
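A concrete version of this Poisson-process picture (my own illustrative sketch, for integer $a$, $b$): among the first $a+b$ arrivals, the time $T_a$ of the $a$-th arrival satisfies $T_a/T_{a+b}\sim\operatorname{Beta}(a,b)$, independently of $T_{a+b}\sim\operatorname{Gamma}(a+b,\lambda)$, and their product is $T_a\sim\operatorname{Gamma}(a,\lambda)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a, b, lam, n = 2, 3, 1.5, 200_000

# Arrival times of a rate-lam Poisson process: cumulative sums of Exp(lam) gaps
gaps = rng.exponential(1.0 / lam, size=(n, a + b))
arrivals = gaps.cumsum(axis=1)
t_a = arrivals[:, a - 1]       # time of the a-th arrival,     claimed ~ Gamma(a, lam)
t_ab = arrivals[:, a + b - 1]  # time of the (a+b)-th arrival,      ~ Gamma(a+b, lam)
ratio = t_a / t_ab             # claimed ~ Beta(a, b), independent of t_ab

ks_ratio = stats.kstest(ratio, stats.beta(a, b).cdf)
ks_ta = stats.kstest(t_a, stats.gamma(a, scale=1.0 / lam).cdf)
corr = np.corrcoef(ratio, t_ab)[0, 1]  # near 0: consistent with independence
```

Here the "Beta factor" arises directly inside one Poisson process, rather than being postulated separately, which is the "forward" direction the question asks about.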

nicola