
I have the following question:

Show that if $(Y_n)$ is a sequence of random variables satisfying $\sqrt{n}(Y_n - \theta) \overset{d}{\to} N(0, 1)$, then $Y_n \overset{P}{\to} \theta$.

I've proceeded as follows but I'm not sure if I'm formally correct:


First, I start with the given condition: $$ \sqrt{n}(Y_n - \theta) \overset{d}{\to} N(0, 1) $$

I can divide both sides by $\sqrt{n}$. Using the Continuous Mapping Theorem, which states that if $X_n \overset{d}{\to} X$ and $g$ is a continuous function, then $g(X_n) \overset{d}{\to} g(X)$, I get: $$ Y_n - \theta = \frac{1}{\sqrt{n}} \sqrt{n}(Y_n - \theta) $$ Therefore, $$ Y_n - \theta \overset{d}{\to} \frac{Z}{\sqrt{n}} \quad \text{where} \quad Z \sim N(0, 1) $$

Next, considering the asymptotic behavior, since $Z \sim N(0, 1)$, I know that: $$ \frac{Z}{\sqrt{n}} \overset{d}{\to} 0 \quad \text{as} \quad n \to \infty $$

This implies: $$ Y_n - \theta \overset{d}{\to} 0 $$ and hence: $$ Y_n \overset{d}{\to} \theta $$

Finally, I use the result that if a sequence of random variables satisfies $X_n \overset{d}{\to} c$, where $c$ is a constant, then $X_n \overset{P}{\to} c$. Therefore: $$ Y_n \overset{d}{\to} \theta \implies Y_n \overset{P}{\to} \theta $$
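For completeness, this constant-limit fact follows directly from the definition of convergence in distribution (a standard derivation, not part of the original argument): for any $\epsilon > 0$, both $c - \epsilon$ and $c + \epsilon$ are continuity points of the limiting CDF $F(x) = \mathbf{1}\{x \ge c\}$, so

$$ P(|X_n - c| > \epsilon) \le F_n(c-\epsilon) + 1 - F_n(c+\epsilon) \longrightarrow F(c-\epsilon) + 1 - F(c+\epsilon) = 0 + 1 - 1 = 0. $$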

Thus, I have shown that $$Y_n \overset{P}{\to} \theta$$


I have doubts about this step: $$ \frac{Z}{\sqrt{n}} \overset{d}{\to} 0 \quad \text{as} \quad n \to \infty $$ I suspect I'm supposed to use Slutsky's Theorem around here, but I couldn't make up my mind. I'd appreciate the help.

Flaw in the above argument: it turns out I can't use the Continuous Mapping Theorem if $g$ is a function of $n$.
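As a quick numerical sanity check of the conclusion (independent of the flawed step), here is a simulation using the canonical instance $Y_n = \bar{X}_n$ with $X_i \sim N(\theta, 1)$ i.i.d., for which $\sqrt{n}(Y_n - \theta)$ is exactly $N(0,1)$. All parameter choices (`theta`, `eps`, `reps`, the values of `n`) are illustrative, not from the question:

```python
import random

# Sanity check of Y_n ->_P theta for Y_n = mean of n i.i.d. N(theta, 1)
# draws, in which case sqrt(n)*(Y_n - theta) ~ N(0, 1) exactly.
random.seed(0)
theta, eps, reps = 2.0, 0.1, 2000  # illustrative choices

def tail_prob(n):
    # Monte Carlo estimate of P(|Y_n - theta| > eps)
    hits = 0
    for _ in range(reps):
        y_n = sum(random.gauss(theta, 1.0) for _ in range(n)) / n
        if abs(y_n - theta) > eps:
            hits += 1
    return hits / reps

results = {n: tail_prob(n) for n in (10, 100, 1000)}
for n, p in results.items():
    print(n, p)  # the tail probability shrinks as n grows
```

The estimated tail probabilities decrease toward $0$ as $n$ grows, consistent with $Y_n \overset{P}{\to} \theta$.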

2 Answers


According to Slutsky's theorem, if $Z_n \xrightarrow{d} Z$ and $a_n \xrightarrow{p} a$, then $a_n Z_n \xrightarrow{d} a Z$. Here $Z_n=\sqrt{n}\left(Y_n-\theta\right)$, $Z \sim N(0,1)$, $a_n = \frac{1}{\sqrt{n}}$, and $\frac{1}{\sqrt{n}} \xrightarrow{p} 0$. We consider: $$ Y_n-\theta=\frac{1}{\sqrt{n}} \cdot \sqrt{n}\left(Y_n-\theta\right) $$

Since $\sqrt{n}\left(Y_n-\theta\right) \xrightarrow{d} N(0,1)$ and $\frac{1}{\sqrt{n}} \xrightarrow{p} 0$, Slutsky's theorem gives $$ Y_n-\theta \xrightarrow{d} 0 \cdot Z = 0. $$ Because the limit $0$ is a constant, convergence in distribution to it implies convergence in probability, so $$ Y_n-\theta \xrightarrow{p} 0 . $$

Convergence in probability implies that for any $\epsilon>0$, $$ \lim _{n \rightarrow \infty} P\left(\left|Y_n-\theta\right| \geq \epsilon\right)=0 . $$

Thus $$ Y_n \xrightarrow{p} \theta. $$

bruno
  • Wouldn't the theorem conclude that $Y_n-\theta \xrightarrow{d} 0$, instead of $Y_n-\theta \xrightarrow{p} 0$? Then we know it converges to a constant (zero in our case). Thus we conclude if $Y_n-\theta \xrightarrow{d} 0$,then $Y_n-\theta \xrightarrow{p} 0$. – ZedVeZed May 28 '24 at 14:02
    \begin{equation} \text { If } X_n \xrightarrow{d} c \text {, where } c \text { is a constant, then } X_n \xrightarrow{p} c \text {. } \end{equation} @ZedVeZed – bruno May 28 '24 at 14:13

Fix $\epsilon>0$. Let $K>0$. Notice that for large enough $n$ you have $\sqrt{n}\epsilon>K$. Therefore $$ \begin{aligned} \limsup_n P\left(|Y_n-\theta|>\epsilon\right) &=\limsup_n P\left(\sqrt{n}|Y_n-\theta|>\sqrt{n}\epsilon\right)\\ &\le \limsup_n P\left(\sqrt{n}|Y_n-\theta|>K\right)\\ &=2(1-\Phi(K)), \end{aligned} $$ where $\Phi$ is the standard normal cdf. The final equality follows because $\sqrt{n}(Y_n-\theta)$ converges in distribution to the standard normal. As $K\to+\infty$, the upper bound above converges to $0$. Thus $$ \limsup_nP\left(|Y_n-\theta|>\epsilon\right)=0, $$ for each $\epsilon>0$. This shows that $Y_n \overset{P}{\to} \theta$.
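The bound $2(1-\Phi(K))$ can also be checked numerically. The sketch below uses the canonical instance $Y_n = \bar{X}_n$ with $X_i \sim N(\theta, 1)$ i.i.d. (so $\sqrt{n}(Y_n-\theta)$ is exactly $N(0,1)$); the values of `theta`, `eps`, `n`, `K`, and `reps` are illustrative choices, not from the answer:

```python
import random
from math import erfc, sqrt

# Numerical check of the bound from the argument above:
#   P(|Y_n - theta| > eps) <= 2*(1 - Phi(K))   whenever sqrt(n)*eps > K,
# using Y_n = mean of n i.i.d. N(theta, 1) draws, for which
# sqrt(n)*(Y_n - theta) ~ N(0, 1) exactly.
random.seed(1)
theta, eps, n, reps = 0.0, 0.1, 400, 4000  # illustrative choices
K = 1.5  # sqrt(n)*eps = 2.0 > K, so the bound applies

def normal_two_sided_tail(k):
    # 2*(1 - Phi(k)) for the standard normal CDF Phi, via erfc
    return erfc(k / sqrt(2.0))

hits = sum(
    abs(sum(random.gauss(theta, 1.0) for _ in range(n)) / n - theta) > eps
    for _ in range(reps)
)
empirical = hits / reps
bound = normal_two_sided_tail(K)
print(empirical, bound)  # the empirical tail probability sits below the bound
```

The Monte Carlo estimate of $P(|Y_n - \theta| > \epsilon)$ lands below $2(1-\Phi(K))$, as the argument predicts; taking larger $K$ (with $n$ large enough that $\sqrt{n}\epsilon > K$) tightens the bound toward $0$.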

John Dawkins