
Let $y_1,\dots,y_n$ be i.i.d. random variables with mean $\mu$ and variance $\sigma^2$. Can it be shown that $$ \begin{align} \sum_{i=1}^n y_i = O_p(\sqrt{n}), & \quad \quad \text{if} \ E[y_i] = 0, \\ \sum_{i=1}^n y_i = O_p(n), & \quad \quad \text{if} \ E[y_i] \neq 0, \end{align} $$ directly from the definition of big-$O_p$ notation (boundedness in probability)?

I saw a simplified explanation of these results on page 4 of these notes, but I'm wondering whether it can be done directly using the definition of $O_p$, i.e. $X_n = O_p(n)$ if for all $\varepsilon > 0$ there exist $M, N > 0$ such that $P(|X_n/n| > M) < \varepsilon$ for all $n > N$.
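
To make the definition concrete with a simple (illustrative) case: if $X_n \sim N(0,n)$, then $X_n/\sqrt{n} \sim N(0,1)$, so $$ P\big(|X_n/\sqrt{n}| > M\big) = 2\Phi(-M), $$ which is below any $\varepsilon > 0$ for $M$ large enough, uniformly in $n$, giving $X_n = O_p(\sqrt{n})$. I would like to see the same style of argument applied to the partial sums above.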

sonicboom

1 Answer


The CLT says $$ n^{1/2}\bigg(\frac{1}{n} \sum_{i=1}^n y_i - \mu\bigg) \stackrel{d}{\to} N(0,\sigma^2). $$ Since the term on the left-hand side converges in distribution, it is bounded in probability: $$ n^{1/2}\bigg(\frac{1}{n} \sum_{i=1}^n y_i - \mu\bigg) = O_p(1), $$ which means $$ \begin{align} \frac{1}{n} \sum_{i=1}^n y_i = \mu + O_p(n^{-1/2}) & \Longleftrightarrow \sum_{i=1}^n y_i = n\mu + O_p(n^{1/2}). \end{align} $$ Now if $E[y_i] = 0$, the $n\mu$ term vanishes and we get the rate $O_p(n^{1/2})$. Otherwise $\mu = O_p(1)$, since a constant is trivially bounded in probability, and thus $$ \begin{align} \sum_{i=1}^n y_i = nO_p(1) + O_p(n^{1/2}) = O_p(n). \end{align} $$
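
As for the "directly from the definition" part of the question, here is a sketch using only the finite variance assumed in the question, via Chebyshev's inequality. Writing $S_n = \sum_{i=1}^n y_i$, $$ P\bigg(\frac{|S_n - n\mu|}{n^{1/2}} > M\bigg) \le \frac{\operatorname{Var}(S_n)}{M^2 n} = \frac{\sigma^2}{M^2} < \varepsilon \quad \text{whenever } M > \sigma/\sqrt{\varepsilon}, $$ and the bound holds for every $n$, so $S_n - n\mu = O_p(n^{1/2})$ directly from the quoted definition. The two cases then follow exactly as above: $S_n = O_p(n^{1/2})$ when $\mu = 0$, and $S_n = n\mu + O_p(n^{1/2}) = O_p(n)$ when $\mu \neq 0$.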

sonicboom