
I found the theorem below at https://math.stackexchange.com/a/1366549/533565, but I cannot find it in the textbook I have, nor in any published paper on Google Scholar. I want to cite this theorem in my project. Does anybody know where the theorem is from?

A theorem by Markov states that if a sequence of random variables $X_1, X_2, \ldots$ with finite variances fulfills one of the following conditions:

  • $\lim_{n \to \infty} \frac{\mathrm{Var} X_n}{n^2} = 0$;
  • $X_1, X_2, \ldots$ are independent and $\lim_{n \to \infty}\frac{1}{n^2}\sum_{i = 1}^n \mathrm{Var} X_i = 0$;

then the sequence $Y_n = \frac{1}{n}\sum_{i=1}^n (X_i - \mathsf{E} X_i)$ converges to $0$ in probability as $n \to \infty$.

Mizzle

1 Answer


This is false. The first condition is insufficient for convergence. Even if the first condition is augmented by assuming the $\{X_i\}_{i=1}^{\infty}$ variables are mutually independent, it is still insufficient. However, the second condition is sufficient.

Define $Y_n = \frac{1}{n}\sum_{i=1}^n (X_i-E[X_i])$.

Counterexample to the first condition.

Let $\{X_n\}_{n=1}^{\infty}$ be mutually independent Gaussian random variables with $X_n \sim N(0, n)$ for $n \in \{1, 2, 3, \ldots\}$. Then the first condition holds because $$ \frac{Var(X_n)}{n^2} = \frac{n}{n^2}\rightarrow 0.$$ However, $Y_n$ is Gaussian with mean zero and variance $\frac{1}{n^2}\sum_{i=1}^n i = \frac{n(n+1)}{2n^2}$. That is, $$Y_n \sim N\left(0, \frac{n+1}{2n}\right).$$ Thus, $Y_n$ does not converge to zero in probability; its limiting distribution is $N(0,1/2)$.
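A minimal simulation sketch of this counterexample (the sample size, random seed, and the threshold $0.5$ below are arbitrary choices for illustration): the empirical variance of $Y_n$ stays near $1/2$, and $P[|Y_n| > 0.5]$ does not shrink as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, arbitrary choice

def simulate_Y_n(n, trials=50_000):
    """Draw `trials` independent copies of Y_n = (1/n) * sum_{i=1}^n X_i,
    where the X_i ~ N(0, i) are independent; since E[X_i] = 0, this matches
    Y_n = (1/n) * sum_{i=1}^n (X_i - E[X_i])."""
    total = np.zeros(trials)
    for i in range(1, n + 1):
        total += rng.normal(loc=0.0, scale=np.sqrt(i), size=trials)
    return total / n

for n in (10, 100, 1000):
    Y = simulate_Y_n(n)
    print(f"n={n:5d}  empirical Var(Y_n) = {Y.var():.3f}  "
          f"theory (n+1)/(2n) = {(n + 1) / (2 * n):.3f}  "
          f"P(|Y_n| > 0.5) = {(np.abs(Y) > 0.5).mean():.3f}")
```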

Proof of convergence under the second condition.

We just use the standard Markov/Chebyshev inequality. Fix $\epsilon>0$. Then $$ P[|Y_n-0|\geq \epsilon] = P[Y_n^2 \geq \epsilon^2] \leq ...$$
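One way the bound might be completed (a sketch, using $E[Y_n] = 0$ and the independence assumed in the second condition, which gives $\mathrm{Var}(Y_n) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i)$):

```latex
\begin{align*}
P\bigl[\,|Y_n - 0| \ge \epsilon\,\bigr]
  = P\bigl[Y_n^2 \ge \epsilon^2\bigr]
  &\le \frac{E[Y_n^2]}{\epsilon^2}
     && \text{(Markov's inequality)} \\
  &= \frac{\mathrm{Var}(Y_n)}{\epsilon^2}
     && \text{(since $E[Y_n] = 0$)} \\
  &= \frac{1}{n^2\epsilon^2}\sum_{i=1}^{n}\mathrm{Var}(X_i)
     && \text{(independence)} \\
  &\longrightarrow 0
     && \text{(second condition),}
\end{align*}
```

so $Y_n \to 0$ in probability.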

Michael
  • Thank you for your reply. I read the textbook Probability: Theory and Examples (5th edition) by Durrett. Here is Theorem 2.2.6 from the textbook: let $\mu_n=ES_n$ and $\sigma_n^2=Var(S_n)$; if $\sigma_n^2/b_n^2 \to 0$, then $(S_n-\mu_n)/b_n \to 0$ in probability, where $S_n$ can be any sequence of random variables. Now let $S_n=\sum_{i=1}^n X_i$ and assume $\sigma_n^2/n^2 \to 0$. Then the sequence $Y_n = \frac{1}{n}\sum_{i=1}^n (X_i - \mathsf{E} X_i)$ converges to $0$ in probability as $n \to \infty$ by Theorem 2.2.6. Is this correct? – Mizzle Jan 07 '21 at 22:33
  • @Mizzle : The way you are defining $\sigma_n^2$ is $\sigma_n^2 = \sum_{i=1}^n Var(X_i) + \sum_{i\neq j} Cov(X_i,X_j)$, which does not relate to the question you asked, at least not in the first condition. Nevertheless if you assume $\sigma_n^2/n^2\rightarrow 0$ then you can apply the theorem with $b_n=n$. – Michael Jan 08 '21 at 01:01
  • @Mizzle : However, I suggest that you ignore Theorem 2.2.6: There is little value in memorizing or using it since its proof is essentially a 1-2 line application of the Markov/Chebyshev inequality. It is more important that you know how to use the Markov/Chebyshev inequality. For example, you should know how to use it to finish the proof of convergence under the second condition in my answer above. – Michael Jan 08 '21 at 01:03
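For reference, the 1-2 line Chebyshev argument behind Theorem 2.2.6 presumably runs along these lines (a sketch in the notation of the comments above, with $\mu_n = E S_n$, $\sigma_n^2 = \mathrm{Var}(S_n)$, and $b_n > 0$):

```latex
\[
  P\!\left[\left|\frac{S_n - \mu_n}{b_n}\right| \ge \epsilon\right]
  \;\le\; \frac{\mathrm{Var}\!\bigl((S_n - \mu_n)/b_n\bigr)}{\epsilon^2}
  \;=\; \frac{\sigma_n^2}{b_n^2\,\epsilon^2}
  \;\longrightarrow\; 0
  \qquad \text{whenever } \sigma_n^2/b_n^2 \to 0,
\]
```

so $(S_n - \mu_n)/b_n \to 0$ in probability, with no independence assumption needed.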