
The Statement of the Problem:

For a Poisson model $\{\text{Pois}(\lambda): \lambda \in (0, \infty) \}$, show that the sample mean $\overline X$ is an unbiased estimator of $\lambda$.

What I Did:

I know this is really basic, but I just want to make sure I went about it correctly. It's basically just applying definitions with no real need for "cleverness," but I'm not too confident in this stuff. So if the rigor is inadequate, or the steps unclear, or it's just plain wrong, please let me know.

So, in general, I want to show the following:

$$ E[\hat \theta]=\theta. $$

In this case, I want to show the following:

$$ E[\overline X]=\lambda. $$

Right? Ok, here I go:

$$ E[ \overline X] = E\left[\frac{1}{n} \sum_{i=1}^n X_i \right] = \frac{1}{n} \cdot E\left[\sum_{i=1}^n X_i \right] = \frac{1}{n}\left[\sum_{i=1}^n \lambda_i \right]= \frac{1}{n} \cdot n\lambda = \lambda.$$

Q.E.D.

That's it, right?
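
As a quick numerical sanity check (a minimal sketch in Python, assuming NumPy is available; the parameter values below are arbitrary), the average of the sample mean over many simulated datasets should land close to $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 3.5, 50, 100_000  # arbitrary illustrative values

# Draw `reps` independent datasets of n i.i.d. Pois(lam) observations,
# take the sample mean of each dataset, then average those means.
samples = rng.poisson(lam, size=(reps, n))
sample_means = samples.mean(axis=1)

print(sample_means.mean())  # prints a value close to 3.5, consistent with E[X-bar] = lambda
```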

  • That is right! – Dave Nguyen Jul 16 '15 at 19:33
  • That's complete, unless you are expected to prove that the mean of a Poisson with parameter $\lambda$ is $\lambda$. – André Nicolas Jul 16 '15 at 19:35
  • @AndréNicolas Oh, boy... I hope not! How would I go about doing that, out of curiosity? Using the MGF of the Poisson? – thisisourconcerndude Jul 16 '15 at 19:36
  • You could. Easier is to compute the sum directly: it is $\sum_{n=1}^\infty ne^{-\lambda}\frac{\lambda^n}{n!}$. The $\frac{n}{n!}$ becomes $\frac{1}{(n-1)!}$; take out a $\lambda$ to get $\lambda e^{-\lambda}\sum_{n=1}^\infty \frac{\lambda^{n-1}}{(n-1)!}$, and the remaining sum is $e^{\lambda}$. – André Nicolas Jul 16 '15 at 19:46
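
Written out in full, the computation sketched in the last comment is

$$ E[X] = \sum_{n=0}^\infty n\, e^{-\lambda}\frac{\lambda^n}{n!} = \sum_{n=1}^\infty e^{-\lambda}\frac{\lambda^n}{(n-1)!} = \lambda e^{-\lambda}\sum_{n=1}^\infty \frac{\lambda^{n-1}}{(n-1)!} = \lambda e^{-\lambda}\cdot e^{\lambda} = \lambda. $$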

1 Answer


Your proof is correct. Linearity of expectation is what justifies the second equality:

$$ E[ \overline X] = \overbrace{E\left[\frac{1}{n} \sum_{i=1}^n X_i \right] = \frac{1}{n} \cdot E\left[\sum_{i=1}^n X_i \right]}^{\text{Linearity of expectation is used here.}} = \frac{1}{n}\left[\sum_{i=1}^n \lambda_i \right]= \frac{1}{n} \cdot n\lambda = \lambda. $$

That said, I would add one more step in the middle:

$$ E[ \overline X] = E\left[\frac{1}{n} \sum_{i=1}^n X_i \right] = \underbrace{\frac{1}{n} \cdot E\left[\sum_{i=1}^n X_i \right] = \frac{1}{n}\left[\sum_{i=1}^n \lambda_i \right]}_{\text{I would add one more step here.}} = \frac{1}{n} \cdot n\lambda = \lambda. $$

Namely, pull the expectation inside the sum before replacing each $E[X_i]$ with $\lambda_i = \lambda$:

$$ \underbrace{\frac{1}{n} \cdot E\left[\sum_{i=1}^n X_i \right] = \frac 1 n \sum_{i=1}^n E[X_i]}_{\text{Linearity of expectation is used here.}} = \frac{1}{n}\left[\sum_{i=1}^n \lambda_i \right] $$

Linearity really means two things: (1) you can pull out constants (and in situations like this, "constant" means non-random), and (2) you can pull out sums. This way of writing the proof is explicit about where linearity is used, and I would be explicit about that, because it is about 99% of what this particular proof is about.
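
Putting both uses of linearity together, the fully explicit chain reads:

$$ E[\overline X] = E\left[\frac{1}{n} \sum_{i=1}^n X_i \right] = \frac{1}{n} \cdot E\left[\sum_{i=1}^n X_i \right] = \frac{1}{n} \sum_{i=1}^n E[X_i] = \frac{1}{n} \sum_{i=1}^n \lambda = \frac{1}{n} \cdot n\lambda = \lambda. $$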