
This is Theorem 13.1 in Allan Gut's graduate textbook on probability theory (page 76, second edition):

Suppose $X$ is non-negative and $g$ is non-negative, strictly increasing, and differentiable. Then $$\mathrm{E}[g(X)]=g(0)+\int_{0}^{\infty }g'(x)P(X>x) \mathop{}\!d x\tag1$$
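(As a sanity check, the formula is certainly plausible: with $g(x)=x+1$ it reads $\mathrm{E}[X]+1=1+\int_0^\infty P(X>x)\mathop{}\!d x$, consistent with (2) below.)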

I cannot see where the term $g(0)$ comes from. I already know that when $X$ is a non-negative random variable we have $$ \mathrm{E}[X]=\int_0^{\infty }P(X>x)\mathop{}\!d x\tag2 $$ Then, applying the change of variable $x=g(y)$, we get $$ \mathrm{E}[g(X)]=\int_{0}^{\infty }P(g(X)>x)\mathop{}\!d x=\int_{0}^{\infty }P(X>g^{-1}(x))\mathop{}\!d x=\int_{g^{-1}(0)}^{\infty }g'(y)P(X>y) \mathop{}\!d y\tag3 $$ But clearly (1) is different from (3). Must I assume that $g^{-1}(0)=0$? Is there something wrong in my derivation?
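For instance, with $g(x)=x+1$ and $X\sim\operatorname{Exp}(1)$, (1) gives $\mathrm{E}[X]+1=2$, whereas forcing the lower limit in (3) to be $0$ would give only $\int_0^\infty e^{-y}\mathop{}\!d y=1$, so something must be off.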

EDIT: I checked this related question, but the derivation there is not clear in all its steps, and it is likewise unclear where the $g(0)$ addend comes from. In any case, I want to see where my mistake is.

3 Answers


In terms of the CDF $F$, integrating by parts,$$\begin{align}E[g(X)]-\int_0^\infty g^\prime(x)P(X>x)\,dx&=\int_0^\infty[g(x)\,dF(x)-g^\prime(x)(1-F(x))\,dx]\\&=\color{red}{[g(x)F(x)]_0^\infty}-\color{blue}{\int_0^\infty g^\prime(x)\,dx}\\&=\color{red}{\lim_{x\to\infty}g(x)}-\left(\color{blue}{\lim_{x\to\infty}g(x)-g(0)}\right)\\&=g(0).\end{align}$$Your approach can be made to work too, but since $g(0)$ may be positive, your first step should have read$$E[g(X)]=g(0)+\int_{g(0)}^\infty P(g(X)>x)\,dx.$$(You can see this, for example, by writing $g(x)=g(0)+g_0(x)$.)
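To spell out that hint: $g_0(X)=g(X)-g(0)$ is non-negative, so applying $(2)$ to it and then substituting $x=t+g(0)$ gives$$E[g(X)]=g(0)+E[g_0(X)]=g(0)+\int_0^\infty P(g_0(X)>t)\,dt=g(0)+\int_{g(0)}^\infty P(g(X)>x)\,dx.$$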

J.G.

The mistake is in assuming that every $x \geq 0$ lies in the range of $g$, so that $g^{-1}(x)$ makes sense. Since $X \geq 0$ and $g$ is strictly increasing, $g(X) \geq g(0)$ always, so $P(g(X)>x)=1$ whenever $x<g(0)$. Hence:

$$\int_{0}^{\infty} P(g(X) > x)\, dx = \int_{0}^{g(0)} 1\, dx + \int_{g(0)}^{\infty} P(X > g^{-1}(x))\, dx,$$ and since the first integral equals $g(0)$, this gives the desired answer.

However, your approach can also be saved by extending $g$ to part of $(-\infty, 0)$ so that it remains strictly increasing and reaches $0$ at some point (necessarily $\leq 0$). Since $X \geq 0$, we have $P(X>y)=1$ for $y<0$, so your integral splits as

$$\int_{g^{-1}(0)}^{0}g'(y)\cdot 1\, dy + \int_{0}^{\infty}g'(y)P(X>y)\, dy,$$ and the first integral equals $g(0)-g\big(g^{-1}(0)\big)=g(0)$, which is, again, the desired answer.
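For instance, if $g(x)=x+1$ on $[0,\infty)$, the same formula extends it to $[-1,0)$ with $g^{-1}(0)=-1$, and the split above reads$$\int_{-1}^{0}1\, dy + \int_{0}^{\infty}P(X>y)\, dy = 1+\mathrm{E}[X] = g(0)+\int_{0}^{\infty}g'(y)P(X>y)\, dy.$$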


A simpler approach is to use Tonelli's theorem directly:

$$ \begin{align*} \mathrm{E}[g(X)]&=\int_{[0,\infty ]}g(t)\,F_X(dt)\\ &=\int_{[0,\infty ]}\big(g(t)-g(0)+g(0)\big)\,F_X(dt)\\ &=\int_{[0,\infty ]}\left(\left(\int_{[0,t]}g'(s)\,d s\right)+g(0)\right)F_X(dt)\\ &=g(0)\overbrace{\int_{[0,\infty ]}F_X(dt)}^{=1}+\int_{\{(s,t):\,0\leqslant s\leqslant t\leqslant \infty \}}g'(s)\,d s\otimes F_X(dt)\\ &=g(0)+\int_{[0,\infty ]}\int_{[s,\infty ]}g'(s)\,F_X(dt)\,d s\\ &=g(0)+\int_{[0,\infty ]}\Pr [X\geqslant s]\,g'(s)\,d s\\ &=g(0)+\int_{[0,\infty ]}\Pr [X> s]\,g'(s)\,d s, \end{align*} $$

where the fourth equality is Tonelli's theorem, and the last follows from the fact that the set $\{s:\Pr [X\geqslant s]\neq \Pr [X>s]\}$ (the set of atoms of $X$) is countable, and so its Lebesgue measure is zero. ∎
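As a quick sanity check of the identity, take, say, $X\sim\operatorname{Exp}(1)$ and $g(x)=x^{2}+1$:$$\mathrm{E}[g(X)]=\mathrm{E}[X^{2}]+1=2+1=3,\qquad g(0)+\int_{0}^{\infty}2s\,e^{-s}\,d s=1+2=3.$$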