
I am trying to understand the difference between the UMVUE (uniformly minimum-variance unbiased estimator, also known as minimum-variance unbiased estimator (MVUE)) and the MVBE (minimum variance bound estimator).

There seems to be a lot of writing on the UMVUE, but not so much on the MVBE. Here is what I have found that discusses this exact topic:

  • This, which seems to indicate that an MVBE would also be a UMVUE (as the variance of an MVBE is smaller than that of the UMVUE).
  • And this (see page 15), which also says that an MVBE is again the UMVUE.

However, I'm still unsure of the fundamental difference between the two.

  • The MVBE is unbiased and its variance attains (i.e., equals) the lower bound from the Cramer-Rao inequality (again from page 15 of that second source).
  • "an unbiased estimator which achieves this [Cramer-Rao] lower bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the [UMVUE]" (source).

Are these not both the same thing?
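
For reference, the Cramer-Rao inequality that both of these statements refer to says that, under the usual regularity conditions, any unbiased estimator $T$ of $\tau(\theta)$ satisfies

$$\operatorname{Var}_{\theta}(T)\ge\frac{[\tau'(\theta)]^2}{I(\theta)},$$

where $I(\theta)$ is the Fisher information of the sample.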

Ruby Pa
  • The difference is that the UMVUE need not attain the Cramer-Rao bound. – StubbornAtom May 14 '22 at 05:19
  • @StubbornAtom So if $T$ attains the CR bound, then $T$ is the UMVUE. But if $T$ is the UMVUE, it doesn't necessarily attain the CR bound. Am I understanding this relationship correctly? Furthermore, since an MVBE's variance must equal the CR bound, it must also be the UMVUE? – Ruby Pa May 14 '22 at 05:31

1 Answer


I take MVBE to mean an unbiased estimator whose variance attains the Cramer-Rao bound.

Therefore, if an MVBE exists, it is always the UMVUE. But the converse is not true, because the variance of the UMVUE does not necessarily attain the Cramer-Rao bound.
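
Spelling this out, with notation chosen here: if $T^*$ is an MVBE for $\tau(\theta)$, then for every unbiased estimator $T$ of $\tau(\theta)$ (under the regularity conditions of the Cramer-Rao inequality)

$$\operatorname{Var}_{\theta}(T^*)=\operatorname{CRLB}(\tau(\theta))\le\operatorname{Var}_{\theta}(T),$$

so $T^*$ has the smallest variance among all unbiased estimators of $\tau(\theta)$, i.e. it is the UMVUE. Nothing forces the reverse: the UMVUE's variance may sit strictly above the bound, as in the example below.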

For a concrete example, consider $X_1,X_2,\ldots,X_n$ i.i.d. Exponential with rate $\theta$. The joint pdf is

$$f_{\theta}(\boldsymbol x)=\theta^n \exp\left(-\theta\sum_{i=1}^n x_i\right)\mathbf1_{x_1,\ldots,x_n>0} \quad,\,\theta>0$$

Therefore, $$\frac{\partial}{\partial\theta}\ln f_{\theta}(\boldsymbol x)=\frac{n}{\theta}-\sum_{i=1}^n x_i=-n\left(\overline x_n - \frac1{\theta}\right) \tag{$\star$}$$
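
Recall the equality (attainment) condition of the Cramer-Rao inequality: an unbiased estimator $T(\boldsymbol X)$ of $\tau(\theta)$ attains the bound exactly when the score factors as

$$\frac{\partial}{\partial\theta}\ln f_{\theta}(\boldsymbol x)=k(\theta)\bigl(T(\boldsymbol x)-\tau(\theta)\bigr)$$

for some function $k(\theta)$ that does not depend on the data.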

Now $(\star)$ is in the form of the equality condition of the Cramer-Rao inequality (with $k(\theta)=-n$, $T(\boldsymbol x)=\overline x_n$ and $\tau(\theta)=1/\theta$), so the variance of the sample mean $\overline X_n$ attains the Cramer-Rao lower bound for $1/\theta$. Moreover, $\overline X_n$ is unbiased for $1/\theta$. Therefore, $\overline X_n$ is an MVBE as well as the UMVUE of $1/\theta$.
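
If you want to see this numerically, here is a quick Monte Carlo sketch in Python (NumPy); the rate, sample size and replication count are arbitrary illustrative choices. It checks that the simulated variance of $\overline X_n$ is close to $\operatorname{CRLB}(1/\theta)=1/(n\theta^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0        # rate of the exponential (arbitrary illustrative value)
n = 10             # sample size (arbitrary illustrative value)
reps = 200_000     # number of simulated samples

# Each row is one i.i.d. Exponential(rate = theta) sample of size n.
samples = rng.exponential(scale=1.0 / theta, size=(reps, n))
xbar = samples.mean(axis=1)                 # unbiased estimator of 1/theta

# CRLB for 1/theta: (d(1/theta)/dtheta)^2 / I(theta)
#                 = (1/theta^4) / (n/theta^2) = 1/(n*theta^2)
crlb_inv_theta = 1.0 / (n * theta**2)

print("simulated Var(xbar):", xbar.var())   # close to 0.025 with these values
print("CRLB(1/theta)      :", crlb_inv_theta)
```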

But there is no MVBE of $\theta$, because only parametric functions of the form $k/\theta$ admit unbiased estimators whose variance attains the Cramer-Rao bound; this is clear from $(\star)$. The UMVUE of $\theta$ exists regardless, and is given by $\hat\theta=\frac{n-1}{\sum_{i=1}^n X_i}$ for $n>1$. As an exercise, using the distribution of $\sum_{i=1}^n X_i$, one can show that the variance of $\hat\theta$ exceeds the Cramer-Rao bound for $\theta$:

$$\operatorname{Var}_{\theta}(\hat\theta)=\frac{\theta^2}{n-2}>\frac{\theta^2}{n}=\text{CRLB}(\theta)\quad,\,n>2$$
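
The same kind of sketch (again with arbitrary illustrative values for the rate, sample size and replication count) shows the simulated variance of $\hat\theta$ landing near $\theta^2/(n-2)$, strictly above $\operatorname{CRLB}(\theta)=\theta^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0        # rate (arbitrary illustrative value)
n = 10             # sample size; n > 2 so theta^2/(n-2) is the exact variance
reps = 200_000     # number of simulated samples

samples = rng.exponential(scale=1.0 / theta, size=(reps, n))
theta_hat = (n - 1) / samples.sum(axis=1)   # UMVUE of theta

print("simulated Var(theta_hat):", theta_hat.var())
print("theta^2 / (n - 2)       :", theta**2 / (n - 2))   # 0.5 with these values
print("CRLB(theta) = theta^2/n :", theta**2 / n)          # 0.4 with these values
```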

StubbornAtom
  • May I ask why, for an iid $\exp(\theta)$ sample, you're using the sample mean as the statistic? Isn't the sufficient statistic $\sum^n_{i=1} X_i$? Also, should $(\star)$ be the score instead (so the partial derivative of it once again)? – Ruby Pa May 14 '22 at 07:47
  • Sample mean is sufficient because sample total is sufficient. And $(\star)$ is the score itself. – StubbornAtom May 14 '22 at 08:06