
I'm trying to solve the following optimization problem:

$$\max_{m\in\mathbb{R}} \mathbb{E}\left[\log (wA + (1-w)B_m)\right],$$

where $0<w<1$ and $A, B_m > 0$ are correlated random variables. $A$ does not depend on the parameter $m$ to be optimized, and the partial derivative of $B_m$ with respect to $m$ is known. (Note: there is a lower bound via Jensen's inequality.)

An alternative problem (which would solve the original one for me) would be to prove:

$$m^\star = \arg\!\max_{m\in\mathbb{R}} \mathbb{E}\left[\log (wA + (1-w)B_m)\right] = \arg\!\max_{ m\in\mathbb{R} }{\mathbb{E}\left[\log B_m\right]}.$$

My intuition is that it should be true, but I have not managed to nail down a proof (I'm not a mathematician). I can prove that, pointwise, maximizing $\log(wA + (1-w)B_m)$ is equivalent to maximizing $\log(B_m)$, but I'm not sure this carries over to the expectations. Maybe it is possible to prove it by arguing something along the lines of: if it holds for every $\omega \in \Omega$, then it holds for the expectation?
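In case it helps, here is a minimal stochastic-gradient sketch of what I'm attempting. The lognormal model for $A$ and $B_m$ is purely a hypothetical placeholder of my own; the real point is the gradient step, which only uses the known $\partial B_m/\partial m$:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.5

# Hypothetical correlated positive pair (placeholder model, not from the
# actual problem): A = exp(X), B_m = exp(m*Y - m**2/2) with (X, Y) jointly
# Gaussian, so dB_m/dm = (Y - m) * B_m is known in closed form.
def sample(n):
    cov = [[1.0, 0.8], [0.8, 1.0]]
    X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.exp(X), Y

m, lr = 0.5, 0.05
for _ in range(2000):
    A, Y = sample(1024)
    B = np.exp(m * Y - 0.5 * m**2)
    dB = (Y - m) * B                  # known partial derivative of B_m
    # Stochastic gradient of E[log(w*A + (1-w)*B_m)] (chain rule inside
    # the expectation, assuming differentiation/expectation can be swapped):
    grad = np.mean((1 - w) * dB / (w * A + (1 - w) * B))
    m += lr * grad                    # ascent step, since we are maximizing
print(f"approximate maximizer m*: {m:.3f}")
```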

Many thanks in advance.

  • You can refer to the solution of http://math.stackexchange.com/questions/21063/difference-between-logarithm-of-an-expectation-value-and-expectation-value-of-a; maybe that could help. – parfois Jun 20 '13 at 03:26

1 Answer


I think you might not be able to prove this at all, at least not in general: the problem is that the RVs are correlated. But you can at least show that your maximization yields a lower bound by applying the concavity of the logarithm: $$ \log(w A+(1-w) B_m)\ge w\log(A)+(1-w)\log(B_m), $$ with equality only when $A = B_m$ (strict concavity makes the inequality strict otherwise). Taking expectations gives $$ \mathbb{E}\left[\log(w A+(1-w) B_m)\right]\ge w\,\mathbb{E}[\log A]+(1-w)\,\mathbb{E}[\log B_m], $$ and since the first term on the right does not depend on $m$, maximizing $\mathbb{E}[\log B_m]$ maximizes this lower bound.
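As a quick sanity check on the "not in general" claim, here is a small Monte Carlo sketch; the correlated lognormal model for $A$ and $B_m$ is my own assumption, chosen only to make both objectives easy to estimate. It suggests the two argmaxes can indeed differ when the variables are correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.5
n = 400_000

# Toy correlated positive pair (an assumption, not from the question):
# A = exp(X), B_m = exp(m*Y - m**2/2), with corr(X, Y) = 0.8.
cov = [[1.0, 0.8], [0.8, 1.0]]
X, Y = np.transpose(rng.multivariate_normal([0.0, 0.0], cov, size=n))
A = np.exp(X)

def full_obj(m):
    """Monte Carlo estimate of E[log(w*A + (1-w)*B_m)]."""
    B = np.exp(m * Y - 0.5 * m**2)
    return np.log(w * A + (1 - w) * B).mean()

def b_obj(m):
    """Monte Carlo estimate of E[log B_m], which equals -m**2/2 here."""
    return (m * Y - 0.5 * m**2).mean()

grid = np.linspace(-1.0, 1.0, 201)
m_full = grid[np.argmax([full_obj(m) for m in grid])]
m_b = grid[np.argmax([b_obj(m) for m in grid])]
print(f"argmax of E[log(wA + (1-w)B_m)] ~ {m_full:+.2f}")
print(f"argmax of E[log B_m]            ~ {m_b:+.2f}")  # ~ 0 by construction
```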