
I have a minor technical question. Let's say $Y = \sum^{n}_{i=1} X_{i}$, and I want to find $P(Y > \gamma)$ by Monte Carlo. Let's assume the $X_{i}$ are i.i.d. Gamma distributed. The way I see it, there are two possible approaches:

Case 1:

  • Generate $n$ random variables: $X_{i} \sim \mathrm{Gamma}(k,\theta)$.
  • Check $X_{i} > \gamma$ for each $X_{i}$.
  • Take the mean of the result from bullet point 2.

Case 2:

  • Generate $m$ random variables $Y_{j} \sim \mathrm{Gamma}(n \cdot k,\theta)$.
  • Check $Y_{j} > \gamma$ for each $Y_{j}$.
  • Take the mean of the result from the above bullet point.

What would be the right approach?
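
In code, I picture the two cases roughly as follows (a rough sketch with NumPy; the values of $n$, $k$, $\theta$, $\gamma$ and the sample sizes are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters
n, k, theta, gamma = 10, 2.0, 1.5, 35.0
m = 100_000  # number of Monte Carlo replications for Case 2

# Case 1 (as described above): generate n variables X_i ~ Gamma(k, theta),
# check X_i > gamma for each one, and take the mean of those indicators.
x = rng.gamma(shape=k, scale=theta, size=n)
case1 = np.mean(x > gamma)  # fraction of the individual X_i exceeding gamma

# Case 2: generate m variables Y_j ~ Gamma(n*k, theta), check Y_j > gamma,
# and take the mean of the indicators.
y = rng.gamma(shape=n * k, scale=theta, size=m)
case2 = np.mean(y > gamma)  # Monte Carlo estimate of P(Y > gamma)

print(case1, case2)
```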

  • Did you mean in Case 1 to check whether $\sum X_i > \gamma$ instead of checking that $X_i > \gamma$? – Aaron Montgomery Oct 19 '22 at 20:43
  • I initially meant to check $X_{i} > \gamma$ for each $i$ and then take the mean. But looking at it again, it does not make much sense to do so. – CasMath Oct 21 '22 at 19:19

1 Answer


Recall that Monte Carlo methods are essentially based on the Law of Large Numbers. So let's say that you want to approximate:

$$\mathbb{P}(Y > \gamma) $$

Then, assuming that you can generate $m$ i.i.d. samples $Y_1, \dots, Y_m$ from the distribution of $Y$, you obtain:

$$ \frac{1}{m} \sum_{i=1}^m \mathbb{1}_{\{Y_i > \gamma\}} \stackrel{m \rightarrow \infty}{\rightarrow} \mathbb{E}[\mathbb{1}_{\{Y > \gamma\}}] = \mathbb{P}(Y > \gamma)$$

where $\mathbb{1}$ is the indicator function. Now, as you pointed out, you can generate the samples $Y_i$ either with the method you proposed or by using the following property of the Gamma distribution: the sum of independent Gamma random variables with the same scale parameter is again Gamma distributed, so here $Y \sim \mathrm{Gamma}(n \cdot k, \theta)$. Therefore Case 1 is not needed unless you plan to sample each $Y_i$ as the sum of $n$ sampled $X_i$'s; in general, Case 2 alone suffices.
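
To make this concrete, here is a minimal sketch in Python/NumPy (the parameter values are placeholders) comparing the two ways of sampling $Y$: directly as $\mathrm{Gamma}(n \cdot k, \theta)$, or as the sum of $n$ sampled $X_i$'s. Both give the same Monte Carlo estimate of $\mathbb{P}(Y > \gamma)$ up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder parameters
n, k, theta, gamma = 10, 2.0, 1.5, 35.0
m = 200_000  # number of Monte Carlo samples of Y

# Route A: sample Y directly, using the closure property
# sum of n i.i.d. Gamma(k, theta) = Gamma(n*k, theta) (same scale theta).
y_direct = rng.gamma(shape=n * k, scale=theta, size=m)
est_direct = np.mean(y_direct > gamma)

# Route B: sample the n X_i's for each replication and sum them
# (more work, but the same distribution for Y).
x = rng.gamma(shape=k, scale=theta, size=(m, n))
y_sum = x.sum(axis=1)
est_sum = np.mean(y_sum > gamma)

print(est_direct, est_sum)  # the two estimates agree up to Monte Carlo error
```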

Good luck! If you find the answer useful, please leave positive feedback or accept the answer. Thank you!