
I'm reading the following paper about bribing and signaling in second-price auctions, and having some difficulty understanding some parts.

On page 8 they develop the following formula, which describes when bidder $j$ will try to bribe bidder $i$:

$$F(A)(θ_j − b) + \mathbb{E}_{θ_i} [(θ_j − θ_i)_{(A<θ_i≤θ_j)}]\ge \mathbb{E}_{θ_i} [(θ_j − θ_i)_{(θ_i≤θ_j)}]$$

which I understand, but then he wants to show the set $\mathbb{B}$ is an interval, so he wants to show that if the inequality holds for some $θ_j$ it will hold for any $\hat{θ}_j \lt θ_j$.

So he differentiates both sides with respect to $θ_j$ to get

$$\frac{\partial LHS(θ_j)}{\partial θ_j} = \max\{F(A), F(θ_j)\}\ge F(θ_j) = \frac{\partial RHS(θ_j)}{\partial θ_j}$$

This is the part I don't understand: how do you differentiate that inequality, and how do you get this result?

Moreover, I'm trying to extend this result to the case of 3 players, so I have the inequality

$$F^2(A)(θ_j − 2b) + \mathbb{E}_{θ_i, θ_k} [(θ_j − \max\{θ_i, θ_k\})_{(A<\max\{θ_i,θ_k\}≤θ_j)}]\ge \mathbb{E}_{θ_i, θ_k} [(θ_j − \max\{θ_i, θ_k\})_{(\max\{θ_i,θ_k\}≤θ_j)}]$$

How can I differentiate this inequality?

1 Answer


Remember that both differentiation and expectation are linear operations, so we can interchange them when the random variable (RV) is integrable.

To get you started with an informal derivation, consider the RHS: \begin{align} \frac{\partial}{\partial \theta_j} \mathbb{E}_{\theta_i} \left[ (\theta_j-\theta_i)\, \mathbb{I}\{ \theta_i \leq \theta_j\} \right] = \mathbb{E}_{\theta_i} \left[ \frac{\partial}{\partial \theta_j} \left( (\theta_j-\theta_i)\, \mathbb{I}\{ \theta_i \leq \theta_j\} \right) \right] \end{align} and, by the product rule, \begin{align} \frac{\partial}{\partial \theta_j} (\theta_j-\theta_i)\, \mathbb{I}\{ \theta_i \leq \theta_j\} &= \frac{\partial}{\partial \theta_j} (\theta_j-\theta_i) \times \mathbb{I}\{ \theta_i \leq \theta_j\} + \frac{\partial}{\partial \theta_j}\mathbb{I}\{ \theta_i \leq \theta_j\} \times (\theta_j-\theta_i) \\ &= \mathbb{I}\{ \theta_i \leq \theta_j\} + 0 = \mathbb{I}\{ \theta_i \leq \theta_j\} \end{align} where we're being a bit loose with the derivative of the indicator (not treating the case $\theta_i=\theta_j$, in which case we might want something related to differentiating the Heaviside step function).

Now we have \begin{align} \mathbb{E}_{\theta_i} \left[ \frac{\partial}{\partial \theta_j} \left( (\theta_j-\theta_i)\, \mathbb{I}\{ \theta_i \leq \theta_j\} \right) \right] = \mathbb{E}_{\theta_i}\left[ \mathbb{I}\{ \theta_i \leq \theta_j\} \right] = \mathbb{P}\left[ \Theta_i \leq \theta_j \right] =: F(\theta_j) \end{align} from the definition of $F$ as a CDF earlier in the paper. Note also that I switched to upper case $\Theta$ to denote it as an RV.
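This interchange can be sanity-checked numerically. Here is a minimal sketch (not from the paper), assuming $\Theta_i \sim \text{Uniform}(0,1)$ so that $F(x)=x$ on $[0,1]$; the sample size, evaluation points, and step size $h$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_i = rng.uniform(0.0, 1.0, size=2_000_000)  # draws of Theta_i ~ Uniform(0,1)

def rhs(theta_j):
    # Monte Carlo estimate of E[(theta_j - Theta_i) * 1{Theta_i <= theta_j}]
    return np.mean((theta_j - theta_i) * (theta_i <= theta_j))

# Central finite difference of the RHS; it should match F(theta_j) = theta_j
h = 1e-3
derivs = {tj: (rhs(tj + h) - rhs(tj - h)) / (2 * h) for tj in (0.25, 0.5, 0.75)}
for tj, d in derivs.items():
    print(f"theta_j = {tj}: numerical derivative ~ {d:.4f}, F(theta_j) = {tj}")
```

With two million samples the finite-difference estimates agree with $F(\theta_j)$ to within a few thousandths, matching $\partial RHS/\partial \theta_j = F(\theta_j)$.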

jjjjjj
  • What does RV stand for? Also, following the same logic on the LHS, we have two summands: the derivative of the first one is $F(A)$ and of the second one is $\mathbb{P}\left[ A < \Theta_i \leq \theta_j \right] = F(\theta_j) - F(A)$, so adding them up gives me $F(θ_j)$. How did they get $\max\{F(A), F(θ_j)\}$? – Daniel Katzan Jun 05 '18 at 05:44
  • Sorry, I meant RV for "random variable". Let me take a look again tomorrow morning – jjjjjj Jun 05 '18 at 06:03
  • Roughly, the case you give assumes $\theta_j > A$. Depending on the relationship between $\theta_j$ and $A$, the derivative of the LHS can instead be $F(A)$ (i.e., if $\theta_j \leq A$, the indicator $A < \theta_i \leq \theta_j$ is never satisfied and only $F(A)(\theta_j - b)$ contributes). So we either have $F(A)$ or $F(\theta_j)$, though I agree that this could be made more explicit... – jjjjjj Jun 05 '18 at 15:24
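To make the case split in the comments concrete, the LHS derivative can be checked the same way. Again a sketch under the Uniform(0,1) assumption, with arbitrary illustrative choices of $A$ and $b$ (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_i = rng.uniform(0.0, 1.0, size=2_000_000)
A, b = 0.6, 0.1  # arbitrary illustrative choices

def F(x):
    # CDF of Uniform(0,1)
    return min(max(x, 0.0), 1.0)

def lhs(tj):
    # F(A)(theta_j - b) + E[(theta_j - Theta_i) * 1{A < Theta_i <= theta_j}]
    return F(A) * (tj - b) + np.mean((tj - theta_i) * ((A < theta_i) & (theta_i <= tj)))

h = 1e-3
derivs = {tj: (lhs(tj + h) - lhs(tj - h)) / (2 * h) for tj in (0.3, 0.9)}
# theta_j = 0.3 < A: expect F(A) = 0.6; theta_j = 0.9 > A: expect F(0.9) = 0.9
for tj, d in derivs.items():
    print(f"theta_j = {tj}: derivative ~ {d:.4f}, max(F(A), F(theta_j)) = {max(F(A), F(tj)):.1f}")
```

Below $A$ the expectation term is identically zero and only $F(A)(\theta_j - b)$ contributes, giving derivative $F(A)$; above $A$ the two contributions sum to $F(A) + (F(\theta_j) - F(A)) = F(\theta_j)$. Together that is $\max\{F(A), F(\theta_j)\}$. For the three-player extension, note that $\max\{\Theta_i, \Theta_k\}$ has CDF $F^2$ (for independent draws), so the same argument should go through with $F$ replaced by $F^2$ and the bribe term $F^2(A)(\theta_j - 2b)$.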