I am trying to estimate the accuracy of a set of Monte Carlo simulations whose result is
\begin{equation} C=1-\frac{P_X(1)}{(P_Y(1))^2} \end{equation}
and $X$ and $Y$ are the results of two separate experiments, so they are independent. In particular, both simulations output only zeros and ones, so what I am measuring is the probability of getting a 1 in each of the two independently. To give an example, it's as if you have an unfair coin and you are trying to determine its bias by throwing it many times.
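To make the setup concrete, here is a minimal sketch of the two experiments, assuming hypothetical true biases `p_x_true` and `p_y_true` (these values and the sample size are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true biases of the two "coins" (simulations X and Y).
p_x_true, p_y_true = 0.3, 0.6
N = 100_000  # sample size per experiment

# Each simulation outputs only zeros and ones; the empirical mean
# estimates the probability of getting a 1.
x = rng.binomial(1, p_x_true, size=N)
y = rng.binomial(1, p_y_true, size=N)

p_x_hat = x.mean()
p_y_hat = y.mean()

# Plug-in estimate of C = 1 - P_X(1) / P_Y(1)^2
C_hat = 1 - p_x_hat / p_y_hat**2
print(p_x_hat, p_y_hat, C_hat)
```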
I already know that the conjugate prior for the success probability in each of $X$ and $Y$ is a beta distribution. I am trying to compute how accurate my results can be given a sample of size $N$, and I would be satisfied if I could find the variance $\text{Var}(C)$ analytically.
I started by simplifying a bit and assuming a symmetric beta, which yields $\text{Var}(X) = \frac{\mu (1-\mu)}{1 + N_r}$, where $\mu$ is the mean and $N_r$ is the size of the sample. Similarly, $\text{Var}(Y^2)$ by itself shouldn't be a problem: since the sample size is large, I could approximate $Y$ with a Gaussian and compute the variance of its square as in Mean and variance of Squared Gaussian: $Y=X^2$ where: $X\sim\mathcal{N}(0,\sigma^2)$?
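As a quick sanity check of that variance formula: for a $\text{Beta}(a, b)$ distribution with $N_r = a + b$ and $\mu = a/(a+b)$, the standard beta variance $\frac{ab}{(a+b)^2(a+b+1)}$ reduces algebraically to $\frac{\mu(1-\mu)}{1+N_r}$. The parameter values below are arbitrary:

```python
from math import isclose

# Hypothetical posterior parameters; any positive a, b would do.
a, b = 30.0, 70.0
N_r = a + b
mu = a / (a + b)

# Standard Beta(a, b) variance ...
var_beta = a * b / ((a + b) ** 2 * (a + b + 1))
# ... versus the mu*(1-mu)/(1+N_r) form from the question.
var_formula = mu * (1 - mu) / (1 + N_r)
print(var_beta, var_formula)  # identical up to floating-point error
```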
However, I already know that the mean and variance of the inverse of a normal do not exist, so how do you compute the variance of this ratio? Also, if you think that trying to compute the variance is not the right approach, feel free to suggest something better.
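For what it's worth, one standard way to get an approximate analytic variance for a ratio like this is the first-order delta method: with $\hat p_x$ and $\hat p_y$ the two sample proportions, the partial derivatives of $C = 1 - p_x/p_y^2$ give $\text{Var}(\hat C) \approx \frac{1}{p_y^4}\text{Var}(\hat p_x) + \frac{4 p_x^2}{p_y^6}\text{Var}(\hat p_y)$. The sketch below (with made-up true probabilities) checks this approximation against a direct Monte Carlo estimate of $\text{Var}(\hat C)$; this is an illustration of the technique, not a claim that it is the only valid approach:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true probabilities and per-experiment sample size.
p_x, p_y = 0.3, 0.6
N = 10_000

# Delta-method (first-order Taylor) variance for C = 1 - p_x / p_y**2:
#   dC/dp_x = -1 / p_y**2,   dC/dp_y = 2 * p_x / p_y**3
var_px = p_x * (1 - p_x) / N
var_py = p_y * (1 - p_y) / N
var_delta = (1 / p_y**2) ** 2 * var_px + (2 * p_x / p_y**3) ** 2 * var_py

# Monte Carlo check: replicate the whole pair of experiments many times
# and look at the empirical variance of the plug-in estimator of C.
reps = 20_000
px_hat = rng.binomial(N, p_x, size=reps) / N
py_hat = rng.binomial(N, p_y, size=reps) / N
C_hat = 1 - px_hat / py_hat**2
var_empirical = C_hat.var()

print(var_delta, var_empirical)  # should agree closely for large N
```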
In the same exact way, but for another coin, you measure P(1). That would be simulation Y.
– Enrico Nov 23 '20 at 18:53