This is Theorem 13.1 in Allan Gut's graduate probability textbook (page 76, second edition):
Suppose $X$ is nonnegative and $g$ is nonnegative, strictly increasing, and differentiable. Then $$\mathrm{E}[g(X)]=g(0)+\int_{0}^{\infty }g'(x)P(X>x) \mathop{}\!d x\tag1$$
I cannot see where the term $g(0)$ comes from. I already know that when $X$ is a nonnegative random variable we have $$ \mathrm{E}[X]=\int_0^{\infty }P(X>x)\mathop{}\!d x\tag2 $$ Then, applying the change of variable $x=g(y)$, I get $$ \mathrm{E}[g(X)]=\int_{0}^{\infty }P(g(X)>x)\mathop{}\!d x=\int_{0}^{\infty }P(X>g^{-1}(x))\mathop{}\!d x=\int_{g^{-1}(0)}^{\infty }g'(y)P(X>y) \mathop{}\!d y\tag3 $$ But clearly (1) is different from (3). Must I assume that $g^{-1}(0)=0$? Is there something wrong in my derivation?
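To make the discrepancy concrete, here is a quick numerical sanity check (the distribution and the function $g$ are my own toy choices, not from Gut): with $X\sim\mathrm{Exp}(1)$ and $g(x)=x+1$ we have $g(0)=1\neq 0$ and $g^{-1}(0)=-1$, and the check suggests that (1) reproduces $\mathrm{E}[g(X)]=2$, while the integral in (3) taken from $0$ instead of $g^{-1}(0)$ gives only $1$, i.e. it falls short by exactly $g(0)$.

```python
import math
import random

random.seed(0)

# Toy example (my own choice, not from Gut): X ~ Exp(1), g(x) = x + 1,
# so g is strictly increasing with g' = 1, g(0) = 1, and g^{-1}(0) = -1.
g = lambda x: x + 1.0
surv = lambda x: math.exp(-x)  # P(X > x) for Exp(1)

# Monte Carlo estimate of E[g(X)]; the exact value is E[X] + 1 = 2.
n = 200_000
lhs = sum(g(random.expovariate(1.0)) for _ in range(n)) / n

# Riemann sum for the integral int_0^inf g'(x) P(X > x) dx (g' = 1 here).
dx, upper = 1e-4, 50.0
integral = sum(surv(k * dx) * dx for k in range(int(upper / dx)))

rhs_with_g0 = g(0) + integral   # formula (1): about 2, matches E[g(X)]
rhs_without_g0 = integral       # integral alone, started at 0: about 1
print(lhs, rhs_with_g0, rhs_without_g0)
```

So numerically the missing mass is exactly $g(0)=1$, which is what I would like to see accounted for in the derivation.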
EDIT: I checked this related question, but the derivation there is not clear in all its steps, nor is it clear where the $g(0)$ addend comes from. In any case, I would like to see where my mistake is.