
I came across the following probability question. Let $(X_n)_{n=1}^{\infty}$ be a sequence of random variables, and let $S \subset \mathbb{R}$ be a measurable set. Assume that for each $y \in S$, we have that:

$$X_ny \xrightarrow[n \to \infty]{} 0 \quad \text{ in probability }.$$

Let $Y$ be a random variable, taking values in $S$, and independent of $X_n$ for all $n$. I want to show that:

$$X_nY \xrightarrow[n \to \infty]{} 0 \quad \text{ in probability }.$$

I have a proof when $S$ is countable: for each $\varepsilon > 0$ we have that:

$$\mathbb{P}(|X_n Y| > \varepsilon) = \sum_{y \in S}\mathbb{P}(|X_n Y| > \varepsilon | Y=y) \mathbb{P}(Y=y) = \sum_{y \in S}\mathbb{P}(|X_n y| > \varepsilon) \mathbb{P}(Y=y).$$

The result follows by taking the limit as $n \to \infty$ and applying the DCT. I think that works, but I don't know how to generalize it. In particular, I don't want to assume that $Y$ is discrete or that it has a density.
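(Not a proof, but the claim is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch with the hypothetical choices $X_n = Z/n$ for a standard normal $Z$, and $Y$ uniform on $(0,1)$, so that $Y$ is neither discrete nor degenerate; all names and parameters are illustrative.)

```python
import random

random.seed(0)

def prob_exceeds(n, eps=0.1, trials=20000):
    """Monte Carlo estimate of P(|X_n Y| > eps), with the hypothetical
    choices X_n = Z/n (Z standard normal) and Y ~ Uniform(0, 1),
    drawn independently of Z."""
    hits = 0
    for _ in range(trials):
        x_n = random.gauss(0.0, 1.0) / n
        y = random.uniform(0.0, 1.0)
        if abs(x_n * y) > eps:
            hits += 1
    return hits / trials

estimates = [prob_exceeds(n) for n in (1, 5, 25, 125)]
print(estimates)  # should decrease towards 0 as n grows
```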

2 Answers


The general case is the same argument written in measure-theoretic language.

Let $\mu_{X_{n}},\mu_{Y}$ be the distributions of $X_{n}$ and $Y$ on $\Bbb{R}$. Then, by independence, the joint distribution of $(X_{n},Y)$ on $\Bbb{R}^{2}$ is the product measure $\mu_{X_{n}}\otimes \mu_{Y}$.

Thus \begin{align}P(|X_{n}Y|>\epsilon)&=\int_{\Bbb{R}}\int_{\Bbb{R}}\mathbf{1}_{|xy|>\epsilon}\,d\mu_{X_{n}}(x)\,d\mu_{Y}(y)\\&=\int_{\Bbb{R}}P(|X_{n}y|>\epsilon)\,d\mu_{Y}(y).\end{align}

Now $P(|X_{n}y|>\epsilon)\to 0$ as $n\to\infty$ for $\mu_{Y}$-almost every $y$ (since $Y$ takes values in $S$), and $0\le P(|X_{n}y|>\epsilon)\leq 1$ with $\int_{\Bbb{R}}1\,d\mu_{Y}=1<\infty$, so the Dominated Convergence Theorem gives $P(|X_{n}Y|>\epsilon)=\int_{\Bbb{R}}P(|X_{n}y|>\epsilon)\,d\mu_{Y}(y)\to 0$ as $n\to\infty$.
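(The Fubini identity above can be checked numerically. A minimal sketch, again with the hypothetical choices $X_n = Z/n$ and $Y$ uniform on $(0,1]$: estimate the left-hand side $P(|X_nY|>\epsilon)$ directly, and the right-hand side by averaging the exact normal tail $y\mapsto P(|X_n y|>\epsilon)$ over samples of $Y$.)

```python
import math
import random

random.seed(1)
n, eps, trials = 3, 0.1, 40000

def normal_tail(t):
    """P(|Z| > t) for a standard normal Z."""
    return math.erfc(t / math.sqrt(2.0))

# Direct estimate of P(|X_n Y| > eps), with X_n = Z/n and Y ~ Uniform(0, 1]
# drawn independently (1 - random() lies in (0, 1], avoiding division by 0 below).
lhs = sum(
    abs(random.gauss(0.0, 1.0) / n * (1.0 - random.random())) > eps
    for _ in range(trials)
) / trials

# Estimate of the right-hand side: average the exact conditional probability
# P(|X_n y| > eps) = P(|Z| > n * eps / y) over samples y of Y.
rhs = sum(normal_tail(n * eps / (1.0 - random.random())) for _ in range(trials)) / trials

print(lhs, rhs)  # the two estimates should agree up to Monte Carlo error
```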

Alternative proof. Note that $X_{n}y\to 0$ in probability for some nonzero $y\in S$ implies that $X_{n}\to 0$ in probability. (If $S=\{0\}$ then $Y=0$ almost surely and the claim is trivial, so we may fix such a $y$.)

So find a subsequence such that $X_{n_{k}}\to 0$ almost surely.

Thus for each fixed $\omega$ in a set of probability $1$, we have $X_{n_{k}}(\omega)\to 0$ while $Y(\omega)$ is a fixed real number, so $X_{n_{k}}(\omega)Y(\omega)\to 0$ as $k\to\infty$.

Thus, along the subsequence, $X_{n_{k}}Y\to 0$ almost surely (and hence also in probability). More generally, given any subsequence $X_{n_{k}}Y$, the same method produces a further subsequence such that $X_{n_{k_{l}}}Y\to 0$ almost surely. This is sufficient for convergence in probability, because of the following standard fact:

A sequence $X_{n}\to X$ in probability if and only if every subsequence $X_{n_{k}}$ has a further subsequence $X_{n_{k_{l}}}$ such that $X_{n_{k_{l}}}\to X$ almost surely.


An alternative probabilistic way is the following.

Fix any $\epsilon>0$. We will show that for any $\delta>0$ there is an $N$ such that $$ \mathbb{P}(|X_n Y|\ge \epsilon)\le \delta\quad\text{for all }n\ge N. $$

Let $A_M=\{\omega: |Y(\omega)|\ge M\}$, $M=1,2,\dots$. This is a decreasing sequence of events with empty intersection (since $Y$ is real-valued), hence by the continuity of probability there is some $M_0$ such that $\mathbb{P}(|Y|\ge M_0)\le \delta/2$.

We may suppose that $S\ne \{0\}$ (otherwise it's trivial). Fix some $y\in S$ such that $y\ne 0$.

Since $X_n y\to 0$ in probability, there is an $N$ such that $$ \mathbb{P}(|X_n y|\ge |y|\epsilon/M_0)\le \delta/2 $$ for all $n\ge N$; dividing through by $|y|\ne 0$ inside the event, this says $\mathbb{P}(|X_n M_0|\ge \epsilon)\le \delta/2$.

Then, since $|X_nY|\ge \epsilon$ together with $|Y|<M_0$ implies $|X_n M_0|\ge \epsilon$, $$ \mathbb{P}(|X_nY|\ge \epsilon)\le \mathbb{P}(|X_n M_0|\ge \epsilon) +\mathbb{P}(|Y|\ge M_0)\le \delta/2+\delta/2=\delta $$ for all $n\ge N$. Q.E.D.
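(The truncation bound in the last display can also be illustrated numerically. A minimal sketch with the hypothetical choices $X_n = Z/n$ for a standard normal $Z$ and $Y\sim\mathrm{Exp}(1)$, which is unbounded, so the truncation at $M_0$ is genuinely needed; the parameters below are illustrative.)

```python
import random

random.seed(2)

def estimate(event, trials=30000):
    """Monte Carlo estimate of P(event) from i.i.d. draws."""
    return sum(event() for _ in range(trials)) / trials

# Hypothetical choices: X_n = Z/n (Z standard normal), Y ~ Exp(1).
n, eps, M0 = 10, 0.5, 8.0

p_prod = estimate(lambda: abs(random.gauss(0.0, 1.0) / n) * random.expovariate(1.0) >= eps)
p_trunc = estimate(lambda: abs(random.gauss(0.0, 1.0) / n) * M0 >= eps)
p_tail = estimate(lambda: random.expovariate(1.0) >= M0)

# Bound from the proof: P(|X_n Y| >= eps) <= P(|X_n| M0 >= eps) + P(Y >= M0)
print(p_prod, "<=", p_trunc + p_tail)
```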
