
In a previous post I asked for help clarifying a property of stable convergence in distribution:

Definition

Let $X_n$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in $\mathbb{R}^N$. We say that the sequence $X_n$ converges stably in distribution with limit $X$, written $X_n\stackrel{\text{st}}{\longrightarrow} X$, if and only if, for every bounded continuous function $f:\mathbb{R}^N\to\mathbb{R}$ and every $\mathcal{F}$-measurable bounded random variable $W$, we have: $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(X_n)\,W]=\mathbb{E}[f(X)\,W]. $$
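(Note that taking $W \equiv 1$ recovers ordinary convergence in distribution: the defining condition reduces to $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(X_n)]=\mathbb{E}[f(X)] $$ for all bounded continuous $f$, so stable convergence implies $X_n\stackrel{\text{d}}{\longrightarrow} X$.)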

What I need to prove now is the following:

Assume $$ (Y_n,Z)\stackrel{\text{d}}{\longrightarrow}(Y,Z), $$

for every measurable random variable $Z$; then

$$ (Y_n,Z)\stackrel{\text{st}}{\longrightarrow}(Y,Z) $$ for all measurable random variables $Z$. So I need to prove that, for any bounded continuous function $f$ and any measurable $Z$, it holds that $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(Y_n,Z)\,W]=\mathbb{E}[f(Y,Z)\,W] $$ for all bounded random variables $W$.

I tried, unsuccessfully, with the portmanteau theorem and Lévy's continuity theorem…

=================================================================

In practice I am trying to prove this proposition from the paper by Podolskij and Vetter:

I did this reasoning for (1)$\Rightarrow$(3), but I am not so sure of its correctness.

  • You will certainly need some assumption on the integrability of $W$; if $W$ is not integrable, the expectations are not even well-defined. It seems to me that the paper assumes that $W$ is bounded (in the sense that $\|W\|_{L^{\infty}} < \infty$), and this simplifies the proof a lot. – saz Apr 13 '18 at 14:23
  • How did you approach the problem with the portmanteau theorem? The idea could be to take $W$ itself as the particular $Z$, but you will probably need some additional assumptions on $W$, as noticed by @saz. – Jim Apr 13 '18 at 14:43
  • Yes, sorry, the $W$ must be bounded. – AlmostSureUser Apr 13 '18 at 14:46
  • In line with saz's comment, let $X$ be any random variable with finite mean but infinite variance, and define $X_n = X/n$. Then $X_n\rightarrow 0$ in distribution, but $E[X_nX] = E[X^2]/n = \infty$ for all $n$. – Michael Apr 13 '18 at 14:46
  • @AlmostSureUser Does $\mathbb R^N$ mean $N$-dimensional real space or $\mathbb R^{\mathbb N}$? – Ѕᴀᴀᴅ Apr 16 '18 at 07:58
  • @AlexFrancisco It means the $N$-dimensional real space. – AlmostSureUser Apr 16 '18 at 08:06

2 Answers


What I suggested in the comment was the following idea: by the portmanteau theorem, $$ (Y_n,Z)\stackrel{\text{d}}{\longrightarrow}(Y,Z) $$ if and only if $$ \lim_{n\rightarrow \infty}\mathbb{E}[f(Y_n, \, Z)]=\mathbb{E}[f(Y,\,Z)] $$ for any bounded continuous function $f:\mathbb{R}^{N+1}\to\mathbb{R}$.

Then take as the particular $Z$ the variable $W$ itself, i.e. $Z := W$. Then you should get $$ Y_n\stackrel{\text{st}}{\longrightarrow} Y, $$ since, for any bounded continuous $f$, you can see $f(Y_n)\,W$ as a particular bounded continuous function $f_1(Y_n,W)$, under the boundedness assumption on $W$.
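(For instance, assuming $|W| \leqslant M$, one explicit choice is $$ f_1(y,w) := f(y)\cdot\big((w \wedge M) \vee (-M)\big), $$ which is bounded and continuous on $\mathbb{R}^N\times\mathbb{R}$ and satisfies $f_1(Y_n,W) = f(Y_n)\,W$, since $|W| \leqslant M$ makes the truncation act as the identity.)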

Moreover, $$ (Y_n,Z)\stackrel{\text{st}}{\longrightarrow}(Y,Z) $$ should follow from $$ Y_n\stackrel{\text{st}}{\longrightarrow} Y $$ by applying the bounded convergence theorem.

Is it right?

Jim
  • Why the bounded convergence theorem? It requires pointwise convergence: https://math.stackexchange.com/questions/235511/explanation-of-the-bounded-convergence-theorem – AlmostSureUser Apr 15 '18 at 20:58
  • @AlmostSureUser You are right, I was too sloppy: bounded convergence is not immediate. Your argument seems right to me. Anyway, you can also prove it, as in Proposition 2.5 (i) of the Podolskij and Vetter paper, by taking the sequence $V_n$ constantly equal to $Z$, which converges in probability to $V = Z$. – Jim Apr 16 '18 at 15:37

$\def\dto{\xrightarrow{\mathrm{d}}}\def\stto{\xrightarrow{\mathrm{st}}}\def\mto{\xrightarrow{\mathrm{m}}}$$(3) \Rightarrow (2)$: Trivial.

$(2) \Rightarrow (1)$: For any $g \in C_b(\mathbb{R}^N)$ and any bounded $\mathscr{F}$-measurable $W$, suppose $|W| \leqslant M$. Take\begin{align*} f: \mathbb{R}^N × \mathbb{R} &\longrightarrow \mathbb{R},\\ (y, z) &\longmapsto g(y) · \frac{1}{2} (|z + M| - |z - M|). \end{align*} Because $(Y_n, W) \dto (Y, W)$ and $f \in C_b(\mathbb{R}^{N + 1})$, then$$ E(g(Y_n) W) = E(f(Y_n, W)) \to E(f(Y, W)) = E(g(Y) W) \quad (n \to \infty). $$ Therefore, $Y_n \stto Y$.
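(A quick check that $f(Y_n, W) = g(Y_n)\,W$: for $|z| \leqslant M$ we have $z + M \geqslant 0$ and $z - M \leqslant 0$, so $$ \frac{1}{2}(|z + M| - |z - M|) = \frac{1}{2}\big((z + M) - (M - z)\big) = z, $$ i.e. the second factor is the truncation of $z$ to $[-M, M]$, which leaves $W$ unchanged.)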

$(1) \Rightarrow (3)$: Suppose $Z$ and $W$ are $\mathscr{F}$-measurable and $W$ is bounded. First, for any $A \in \mathscr{B}(\mathbb{R}^N)$ and $B \in \mathscr{B}(\mathbb{R})$, there exists $\{g_k\} \subseteq C_b(\mathbb{R}^N)$ such that $g_k \mto I_A$, i.e.$$ m(\{ x \in \mathbb{R}^N \mid g_k(x) \neq I_A(x)\}) \to 0 \quad (k \to \infty). $$ For any $k \geqslant 1$, because $Y_n \stto Y$ and $I_B(Z) W$ is $\mathscr{F}$-measurable and bounded, then$$ E(g_k(Y_n) I_B(Z) W) \to E(g_k(Y) I_B(Z) W) \quad (n \to \infty). $$ Note that $g_k \mto I_A$ and $I_B(Z) W$ is bounded, thus$$ E(I_A(Y_n) I_B(Z) W) \to E(I_A(Y) I_B(Z) W) \quad (n \to \infty). \tag{1} $$

Now, for any $C \in \mathscr{B}(\mathbb{R}^{N + 1})$, there exist $\{A_{k, j}\} \subseteq \mathscr{B}(\mathbb{R}^N)$ and $\{B_{k, j}\} \subseteq \mathscr{B}(\mathbb{R})$ such that $\{h_k\}$ defined by\begin{align*} h_k : \mathbb{R}^N × \mathbb{R} &\longrightarrow \mathbb{R},\\ (y, z) &\longmapsto \sum_{j = 1}^{s_k} I_{A_{k, j}}(y) I_{B_{k, j}}(z) \end{align*} satisfies $h_k \mto I_C$. For any $k \geqslant 1$, from (1) and linearity we get$$ E(h_k(Y_n, Z) W) \to E(h_k(Y, Z) W) \quad (n \to \infty). $$ Because $h_k \mto I_C$ and $W$ is bounded, then$$ E(I_C(Y_n, Z) W) \to E(I_C(Y, Z) W) \quad (n \to \infty). \tag{2} $$

Now, for any $f \in C_b(\mathbb{R}^{N + 1})$, there exists a sequence of simple functions $\{f_k\}$ such that $f_k \rightrightarrows f$, i.e. $f_k \to f$ uniformly. For any $k \geqslant 1$, from (2) and linearity we get$$ E(f_k(Y_n, Z) W) \to E(f_k(Y, Z) W) \quad (n \to \infty). $$ Because $f_k \rightrightarrows f$ and $W$ is bounded, then$$ E(f(Y_n, Z) W) \to E(f(Y, Z) W) \quad (n \to \infty). $$ Therefore, $(Y_n, Z) \stto (Y, Z)$.
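(One standard construction for such a sequence, assuming $|f| \leqslant K$: take $$ f_k := 2^{-k} \lfloor 2^k f \rfloor, $$ which is simple because it takes finitely many values in $2^{-k}\mathbb{Z} \cap [-K - 1, K]$, and satisfies $|f_k - f| \leqslant 2^{-k}$, hence $f_k \rightrightarrows f$.)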

Ѕᴀᴀᴅ
  • The proof of $(1)\Rightarrow (3)$ I had in mind was much simpler:

    Assume $$Y_n\stackrel{st}{\rightarrow} Y.$$ Then, by definition,
    $$ E[g(Y_n)\,W]\to E[g(Y)\,W] $$ for any bounded continuous function $g(y)$ and for any bounded random variable $W$. Now consider any bounded continuous function $f(y,z)$ and an arbitrary $\mathcal{F}$-measurable variable $Z$, and note that $$E[f(Y_n,Z)\,W]=E\big[E[f(Y_n,c)\,W \mid Z=c]\big]\rightarrow E\big[E[f(Y,c)\,W \mid Z=c]\big]= E[f(Y,Z)\,W].$$

    – AlmostSureUser Apr 16 '18 at 14:29
  • @AlmostSureUser This works only when $Z$ is a continuous or discrete random variable. For a general $Z$, the conditional expectation is hard to characterize rigorously. – Ѕᴀᴀᴅ Apr 16 '18 at 14:41
  • I do not understand the meaning of the double arrow. – AlmostSureUser Apr 23 '18 at 07:25
  • @AlmostSureUser It means uniform convergence. – Ѕᴀᴀᴅ Apr 23 '18 at 07:40
  • Ok, thank you. Another clarification: I do not see how we pass from the penultimate equation to the last one. Do we take $k\rightarrow\infty$? Is it legitimate? – AlmostSureUser Apr 23 '18 at 07:54
  • @AlmostSureUser Suppose $M_k=\sup|f_k-f|$ and $|W|\leqslant M$. Because\begin{align}|E(f(Y_n, Z) W)-E(f(Y, Z) W)|&\leqslant|E(f(Y_n, Z) W)-E(f_k(Y_n, Z) W)|\\&\quad+|E(f_k(Y_n, Z) W)-E(f_k(Y, Z) W)|\\&\quad+|E(f_k(Y, Z) W)-E(f(Y, Z) W)|\\&\leqslant M_kM+|E(f_k(Y_n, Z) W)-E(f_k(Y, Z) W)|+M_kM,\end{align}and $M_k→0$ ($k→∞$), so$$\varlimsup_{n→∞}|E(f(Y_n, Z) W)-E(f(Y, Z) W)|\leqslant 2M_kM,$$and letting $k→∞$ gives $E(f(Y_n, Z) W)→E(f(Y, Z) W)$. – Ѕᴀᴀᴅ Apr 23 '18 at 08:16
  • @Saad I gather that the existence of a sequence $(g_k)$ is given by Lusin's Theorem. What do you mean by convergence in "m"? I am not exactly sure why the step from the equation before (1) to (1) holds. Could you please elaborate? Many thanks! – SafariPark Sep 22 '23 at 12:41