
I'm having trouble approaching this question because of the difference in parameterization. For an exponential distribution with mean $\theta$, I would go about finding the UMVUE like so: \begin{align*} L(\theta\vert x) &= \prod_i \frac{1}{\theta} e^{-\frac{x_i}{\theta}}\\ &= \frac{1}{\theta^n} e^{-\frac{\sum x_i}{\theta}} \end{align*}

$$\ell = -n \ln(\theta) - \frac{\sum x_i}{\theta}$$

\begin{align*} \frac{\partial \ell}{\partial \theta} &= -\frac{n}{\theta} + \frac{\sum x_i}{\theta^2}\\ &= \frac{\sum x_i - n\theta}{\theta^2}\\ &= n \frac{\bar{x} - \theta}{\theta^2} \end{align*}

This tells me that $\bar{X}$ is the UMVUE: it is unbiased for $\theta$ and is a function of $\sum x_i$, and Casella and Berger note: "Any function of $\sum x_i$ that is an unbiased estimator of $\theta$ is the best unbiased estimator."
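For what it's worth, the bound is actually attained in this mean-$\theta$ parameterization, which can be checked directly from the score above together with $\operatorname{Var}\left(\sum X_i\right) = n\theta^2$:

\begin{align*} \operatorname{Var}(\bar{X}) &= \frac{\theta^2}{n},\\ I_n(\theta) = E\left[\left(\frac{\sum X_i - n\theta}{\theta^2}\right)^2\right] &= \frac{\operatorname{Var}\left(\sum X_i\right)}{\theta^4} = \frac{n\theta^2}{\theta^4} = \frac{n}{\theta^2},\\ \text{CRLB} = \frac{1}{I_n(\theta)} &= \frac{\theta^2}{n} = \operatorname{Var}(\bar{X}). \end{align*}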

Now, in the problem at hand, the mean is $\frac{1}{\theta}$ (density $\theta e^{-\theta x}$) and I need the UMVUE of $\theta$ itself, which raises a few questions for me. First, how would you go about finding the UMVUE? Would it involve the Lehmann–Scheffé theorem? My professor sort of glossed over the topic, so I'm not really sure how it works. Also, I'm told this UMVUE will have a variance strictly larger than its CRLB; what's the reasoning behind that?
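Based on my reading so far, I believe the route looks something like the following, assuming the density $\theta e^{-\theta x}$ and the rate $\theta$ as the estimand (please correct me if a step is wrong):

\begin{align*} \sum_i X_i &\sim \operatorname{Gamma}(n, \theta) \text{ (rate $\theta$), a complete sufficient statistic},\\ E\left[\frac{n-1}{\sum_i X_i}\right] &= (n-1)\cdot\frac{\theta}{n-1} = \theta \quad (n \ge 2), \quad\text{so } T = \frac{n-1}{\sum_i X_i} \text{ is unbiased},\\ \operatorname{Var}(T) &= \frac{\theta^2}{n-2} > \frac{\theta^2}{n} = \text{CRLB} \quad (n \ge 3), \end{align*}

so Lehmann–Scheffé would make $T$ the UMVUE, with a variance that strictly exceeds the bound.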

  • The way you have it parameterized, the mean is $\theta$ and the variance is $\theta^2$. Yes, $\bar{X}$ is a complete sufficient statistic for $\theta$ and it is unbiased, therefore it is the UMVUE by the Lehmann–Scheffé theorem. The variance of $\bar{X}$ is $\frac{\theta^2}{n}$. The Cramér–Rao lower bound is $\left( E\left[ \left(\frac{\sum{X_i}-n \theta}{\theta^2} \right)^2 \right] \right)^{-1}$. Since $\sum{X_i}$ has a Gamma distribution, you can find the Cramér–Rao lower bound and compare it to the variance of $\bar{X}$ (a quick numerical check is sketched after these comments). – John L Mar 08 '21 at 23:10
  • https://math.stackexchange.com/q/2034206/321264, https://math.stackexchange.com/q/2819978/321264 – StubbornAtom Mar 09 '21 at 04:17
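To make John L's suggested comparison concrete, here is a minimal Monte Carlo sketch (NumPy assumed; the values of $\theta$, $n$, and the replication count are arbitrary choices), covering both the mean-$\theta$ case from the derivation above and the rate case sketched in the question:

```python
import numpy as np

# Monte Carlo sanity check (hypothetical theta, n, and replication count).
rng = np.random.default_rng(42)
theta, n, reps = 2.0, 10, 200_000

# Mean-theta parameterization: X ~ Exp with mean theta.
x = rng.exponential(scale=theta, size=(reps, n))
xbar = x.mean(axis=1)
print("mean-theta case: Var(xbar) ~", xbar.var(), "| CRLB =", theta**2 / n)

# Rate parameterization: X ~ Exp with rate theta, i.e. mean 1/theta.
y = rng.exponential(scale=1 / theta, size=(reps, n))
t = (n - 1) / y.sum(axis=1)  # candidate unbiased estimator of the rate theta
print("rate case: Var(T) ~", t.var(),
      "| theta^2/(n-2) =", theta**2 / (n - 2),
      "| CRLB =", theta**2 / n)
```

With these settings the first estimate should land near $\theta^2/n = 0.4$ (bound attained), while the second should land near $\theta^2/(n-2) = 0.5$, strictly above the bound of $0.4$.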

0 Answers