19

Let $X$ be in $\mathfrak{L}^1(\Omega,\mathfrak{F},P)$ and let $\mathfrak{G}\subset \mathfrak{F}$ be a sub-$\sigma$-algebra.

Prove that if $X$ and $E(X|\mathfrak{G})$ have the same distribution, then they are equal almost surely.

I know what I have to show, namely that $X$ is $\mathfrak{G}$-measurable, but I don't know how...

Davide Giraudo
Marc

3 Answers

17

Let's denote $Y = E(X|\mathfrak{G})$.

$\color{blue}{\text{Step 1:}}$ Let's first suppose $X \in L^2$. Then, since $X$ and $Y$ have the same distribution, $EX^2 = EY^2$, and we have \begin{align} E\left(X - Y\right)^2 &= E(X^2) - 2E(XY) + E(Y^2) \\ &= EX^2 + EY^2 - 2E\left(E(XY|\mathfrak{G})\right) \\ &= EX^2 + EY^2 - 2E\left(YE(X|\mathfrak{G})\right) \\ &= EX^2 + EY^2 - 2EY^2 = 0, \end{align} where the second line uses the tower property and the third pulls the $\mathfrak{G}$-measurable factor $Y$ out of the conditional expectation,

so $X=Y$ almost surely.
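To spell out the last implication (also raised in the comments below): by Markov's inequality, for every $\varepsilon > 0$,

$$P(|X - Y| \geq \varepsilon) \leq \frac{E(X-Y)^2}{\varepsilon^2} = 0,$$

and $\{X \neq Y\} = \bigcup_{n\geq 1}\{|X-Y|\geq 1/n\}$ is a countable union of null sets.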

$\color{blue}{\text{Step 2:}}$ Now we remove the assumption $X \in L^2$. Consider $X' = (X\wedge a) \vee b$ and $Y' = (Y\wedge a) \vee b$ with $b < a$, i.e., the truncated versions of $X$ and $Y$. We will show that $X' = Y'$ almost surely; then, sending $a \to +\infty$ and $b\to -\infty$, we conclude that $X = Y$ almost surely, as made precise just below.
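To justify the limiting step: if $X(\omega) \neq Y(\omega)$, then the truncations already differ at $\omega$ for every rational $a$ large enough and $b$ small enough, so

$$\{X \neq Y\} \subset \bigcup_{a,b \in \mathbb{Q},\ b < a} \{(X\wedge a)\vee b \neq (Y\wedge a)\vee b\},$$

which is a countable union of null sets once Step 2 is carried out for every such pair $(a,b)$.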

To prove $X'= Y'$ almost surely, we will prove $E(X'|\mathfrak{G}) = Y'$; then, since $X'$ and $Y'$ are bounded (hence in $L^2$) and still have the same distribution (being images of $X$ and $Y$ under the same Borel map $x \mapsto (x\wedge a)\vee b$), $\color{blue}{\text{Step 1}}$ shows that $X' = Y'$ almost surely.

So all we need to do now is to prove $$E(X'|\mathfrak{G}) = Y'$$

First of all, $Y'$ is $\mathfrak{G}$-measurable.

Then, by Jensen's inequality for conditional expectations applied to the concave function $x \mapsto x\wedge a$, we have $$E(X\wedge a|\mathfrak{G}) \leq E(X|\mathfrak{G})\wedge a = Y\wedge a$$

However, since $X\wedge a$ and $Y\wedge a$ have the same distribution, the tower property gives $E\left(E(X\wedge a|\mathfrak{G})\right) = E(X\wedge a) = E(Y\wedge a)$, so the above inequality cannot be strict on a set of positive probability, and we get

$$E(X\wedge a|\mathfrak{G}) = Y\wedge a$$
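Here we are using the standard fact that an almost sure inequality between integrable random variables with equal expectations is an almost sure equality:

$$U \leq V \ \text{a.s.} \quad\text{and}\quad E(V - U) = 0 \quad\Longrightarrow\quad U = V \ \text{a.s.},$$

applied with $U = E(X\wedge a|\mathfrak{G})$ and $V = Y\wedge a$.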

By a similar argument, we get

$$E((X\wedge a) \vee b|\mathfrak{G}) = (Y\wedge a)\vee b$$
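To spell out the "similar argument": $x \mapsto x\vee b$ is convex, so the conditional Jensen inequality now points the other way,

$$E((X\wedge a)\vee b|\mathfrak{G}) \geq E(X\wedge a|\mathfrak{G})\vee b = (Y\wedge a)\vee b,$$

and since $(X\wedge a)\vee b$ and $(Y\wedge a)\vee b$ have the same distribution, both sides again have the same expectation, forcing equality almost surely. This is exactly $E(X'|\mathfrak{G}) = Y'$, which completes the proof.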

  • Nice! Actually, your proofs are not so different (I guess there are not a thousand ways to solve the problem): your part with $X\wedge a$ corresponds to the integration on $A$/$B$ in my answer, and $(X\wedge a)\vee b$ to that with $A'$, $B'$. – Davide Giraudo Nov 11 '14 at 17:38
  • @DavideGiraudo Yeah, we both tried to find the lost $L^2$ integrability by truncation – Petite Etincelle Nov 11 '14 at 17:43
  • Hello, I have a question. Why does $\mathbb E(X-Y)^2 = 0$ imply $X=Y$ almost surely? – Duke Jan 07 '15 at 02:28
  • @Duke Since $\{X \neq Y\} = \cup_{n=1}^\infty\{|X-Y| \ge \frac{1}{n}\}$, we have $P(X \neq Y) \leq \sum_{n=1}^\infty P(|X-Y| \ge \frac{1}{n})$. Then if $P(X \neq Y) > 0$, we can find $N$ such that $P(|X-Y| \ge \frac{1}{N}) > 0$, and then $E(X-Y)^2 \ge \frac{1}{N^2}P(|X-Y| \ge \frac{1}{N}) > 0$, so we get a contradiction. – Petite Etincelle Jan 07 '15 at 08:33
6

Here, the main difficulty is that we do not assume finiteness of the expectation of $X^2$.

Fix a real number $x$ and define $A:=\{X\leqslant x\}$ and $B:=\{\mathbb E[X\mid\mathcal G]\leqslant x\}$. Using the assumption, we have $$\mathbb E[X\chi(A)]=\mathbb E[X\chi(B)].$$

Indeed, since $B$ belongs to $\mathcal G$, we have $$\mathbb E[X\chi(B)]=\mathbb E[\mathbb E[X\mid\mathcal G]\chi(B)] =\mathbb E[\mathbb E[X\mid\mathcal G]\chi\{\mathbb E[X\mid\mathcal G]\leqslant x\}],$$ and the random variables $X\chi\{X\leqslant x\}$ and $\mathbb E[X\mid\mathcal G]\chi\{\mathbb E[X\mid\mathcal G]\leqslant x\}$ have the same distribution.
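To expand on the last point: writing $f(t) := t\,\chi(t\leqslant x)$ (a notation introduced only for this remark), these two random variables are obtained by applying the same Borel function $f$,

$$X\chi\{X\leqslant x\} = f(X), \qquad \mathbb E[X\mid\mathcal G]\,\chi\{\mathbb E[X\mid\mathcal G]\leqslant x\} = f(\mathbb E[X\mid\mathcal G]),$$

so they have the same distribution, and in particular the same expectation, because $X$ and $\mathbb E[X\mid\mathcal G]$ do.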

Define $C_1:=A\setminus B$ and $C_2:=B\setminus A$. Since $\mathbb P(A)=\mathbb P(B)$ (again because $X$ and $\mathbb E[X\mid\mathcal G]$ have the same distribution), we have $$\mathbb E\left[(X-x)\chi(C_1)\right]=\mathbb E[(X-x)\chi(C_2)].$$ As $(X-x)\chi(C_1)\leqslant 0\leqslant (X-x)\chi(C_2)$, both expectations vanish; since $X-x>0$ on $C_2$, this forces $\mathbb P(C_2)=0$, hence also $\mathbb P(C_1)=\mathbb P(C_2)=0$, that is, $\mathbb P(A\Delta B)=0$.

Define $A':=\{X\geqslant -x\}$ and $B':=\{\mathbb E[X\mid\mathcal G]\geqslant -x\}$. By the same argument used with $-X$ instead of $X$, we get $\mathbb P(A'\Delta B')=0$. Defining $A'':=A\cap A'$ and $B'':=B\cap B'$, we have $\mathbb P(A''\Delta B'')=0$, hence $$\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')]=\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')]=\mathbb E[X^2\chi(|X|\leqslant x)]$$ and $$\mathbb E\left[X\,\mathbb E[X\mid\mathcal G]\chi(A'')\right]=\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')],$$ hence $$\mathbb E\left[\left(X-\mathbb E[X\mid\mathcal G]\right)^2\chi\{|X|\leqslant x\}\right]=0.$$ As $x$ is arbitrary, the conclusion follows.
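To make the final computation explicit (a routine expansion, spelled out here for completeness): note that $A''=\{|X|\leqslant x\}$, expand the square, and use $\mathbb P(A''\Delta B'')=0$ together with the two identities above:

\begin{align} \mathbb E\left[\left(X-\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')\right] &= \mathbb E[X^2\chi(A'')] - 2\,\mathbb E\left[X\,\mathbb E[X\mid\mathcal G]\chi(A'')\right] + \mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(A'')] \\ &= \mathbb E[X^2\chi(|X|\leqslant x)] - 2\,\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')] + \mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')] \\ &= \mathbb E[X^2\chi(|X|\leqslant x)] - \mathbb E[X^2\chi(|X|\leqslant x)] = 0. \end{align}

(The middle identity uses that $B''\in\mathcal G$, so $\mathbb E[X\,\mathbb E[X\mid\mathcal G]\chi(B'')]=\mathbb E[\left(\mathbb E[X\mid\mathcal G]\right)^2\chi(B'')]$ by the tower property.)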

Davide Giraudo
0

Here is how I argued, writing $Y$ for the $X$ of the question:

If we take the usual simple functions $s_n\rightarrow Y^+$ and $s_n'\rightarrow \mathbb{E}[Y|\mathcal{G}]^+$, then by the monotone convergence theorem and by $\mathbb{P}(m/2^n<Y<(m+1)/2^n)=\mathbb{P}(m/2^n<\mathbb{E}[Y|\mathcal{G}]<(m+1)/2^n)$ (equality of the two distributions), we have:

$$\int Y^+d\mathbb{P}=\int E[Y|\mathcal{G}]^+d\mathbb{P}$$
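A short justification of the equality used in the next paragraph: by the tower property $\mathbb{E}\big[\mathbb{E}[Y^+|\mathcal{G}]\big]=\mathbb{E}[Y^+]$, so the identity above gives

$$\mathbb{E}\Big[\mathbb{E}[Y^+|\mathcal{G}]-\big(\mathbb{E}[Y|\mathcal{G}]\big)^+\Big]=\int Y^+\,d\mathbb{P}-\int \mathbb{E}[Y|\mathcal{G}]^+\,d\mathbb{P}=0,$$

and the integrand is nonnegative by the conditional Jensen inequality stated next, so it vanishes almost everywhere.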

Now, by Jensen's inequality for conditional expectations, using that $x\mapsto x^+$ is convex, $\mathbb{E}[Y^+|\mathcal{G}]\geq(\mathbb{E}[Y|\mathcal{G}])^+$. However, the integral identity above shows that in fact $\mathbb{E}[Y^+|\mathcal{G}]=(\mathbb{E}[Y|\mathcal{G}])^+$ almost everywhere. Finally, if $S=\{Y^+>\mathbb{E}[Y|\mathcal{G}]^++\varepsilon\}$: $$0=\int_S \mathbb{E}[Y^+|\mathcal{G}]-\mathbb{E}[Y|\mathcal{G}]^+\,d\mathbb{P}=\int_S Y^+-\mathbb{E}[Y|\mathcal{G}]^+\,d\mathbb{P}\geq \varepsilon\,\mathbb{P}(S)$$

Hence, $\mathbb{P}(Y^+>\mathbb{E}[Y|\mathcal{G}]^+)=0$. By the same token, $\mathbb{P}(Y^+<\mathbb{E}[Y|\mathcal{G}]^+)=0$, and so $Y^+=\mathbb{E}[Y|\mathcal{G}]^+$ almost everywhere. Because $-Y$ and $\mathbb{E}[-Y|\mathcal{G}]$ also have the same distribution, the computations above show that almost everywhere $Y^-=(-Y)^+=\mathbb{E}[-Y|\mathcal{G}]^+=\mathbb{E}[Y|\mathcal{G}]^-$; thus the equalities below also hold almost everywhere:

$$Y=Y^+-Y^-=\mathbb{E}[Y|\mathcal{G}]^+-\mathbb{E}[Y|\mathcal{G}]^-=\mathbb{E}[Y|\mathcal{G}]$$

Kadmos