
Let $A,B$ be continuous random variables. Let $E,G,K$ be events. Let $t \in \operatorname{image}(B)$.

(I forget whether any two continuous random variables necessarily have a well-defined joint pdf. If not, then assume the joint pdf is well-defined whenever necessary.)

Question 1: Can we say $P(B \le A|B=t) = P(t \le A|B=t)$?

Question 2: If so, then how? If not then why?

What I've tried: I think we're doing something like $P(E|K)=P(E \cap K|K)$, which I understand when $P(K)>0$, so that $P(G|K)$ is defined. But what is being done here, where technically $P(K)=0$? Of course, writing '$P(\cdot|B=t)$' in the first place is notational; we're not literally conditioning on the $P$-null event $\{B=t\}$. Still, I don't see exactly what is being done here. Well...

As I understand, $$\text{RHS} = P(t \le A|B=t) := \frac{\int_{t \le A} f_{A,B}(a,t) da}{\int_{\mathbb R} f_{A,B}(a,t) da} = \frac{\int_{t}^{\infty} f_{A,B}(a,t) da}{\int_{\mathbb R} f_{A,B}(a,t) da}$$

$$= \frac{\int_{t}^{\infty} f_{A,B}(a,t) da}{f_{B}(t)}= \int_{t}^{\infty} \frac{f_{A,B}(a,t)}{f_{B}(t)} da,$$ where/whence $f_{A|B=t}(a) := \frac{f_{A,B}(a,t)}{f_{B}(t)}$.

That's all I got.
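As a sanity check of that RHS formula, here is a small numerical experiment under an assumed concrete joint law (a standard bivariate normal with correlation $\rho$, so $A \mid B=t \sim N(\rho t,\, 1-\rho^2)$ in closed form; the specific $\rho$, $t$, and window width $\delta$ below are all just illustrative choices, not from the question):

```python
import math
import random

# Assumed concrete example (not from the question): standard bivariate
# normal (A, B) with correlation rho, so A | B = t ~ N(rho*t, 1 - rho^2).
rho, t = 0.5, 1.0
sigma = math.sqrt(1 - rho ** 2)

# Exact value of RHS = P(t <= A | B = t) = int_t^inf f_{A|B=t}(a) da,
# computed from the normal CDF via erf.
exact = 0.5 * (1 - math.erf((t - rho * t) / (sigma * math.sqrt(2))))

# Monte Carlo estimate of P(t <= A | B in [t, t+delta]) for small delta,
# which should be close to the exact conditional-density value above.
random.seed(0)
delta, hits, total = 0.05, 0, 0
for _ in range(2_000_000):
    b = random.gauss(0, 1)
    if t <= b <= t + delta:
        a = random.gauss(rho * b, sigma)  # draw A from its conditional law
        total += 1
        hits += a >= t
est = hits / total
print(exact, est)  # the two values should agree to a couple of decimals
```

So at least in this example, the density-ratio definition of $P(t \le A \mid B=t)$ matches the "condition on a thin window around $t$" interpretation.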

Not really sure how to evaluate the LHS. Perhaps LHS := RHS, i.e. the LHS is simply defined to be the RHS?

Update: Wait, I think I have an idea for how to do the LHS:

$$P(B \le A|B=t) = \int \int_{b \le a} f_{(A,B)|B=t}(a,b) da db$$

Here,

  1. '$f_{(A,B)|B=t}(a,b)$' is, I guess, meant like $f_{(A,B)|T=t}(a,b)$, where $T$ is a random variable that is (not just almost surely but really) surely equal to $B$.

  2. '$f_{(A,B)|T=t}(a,b)$' is meant like two random variables conditioned on a third. As I understand multivariate conditional joint distributions, this is $f_{(A,B)|T=t}(a,b) := \frac{f_{A,B,T}(a,b,t)}{f_T(t)}$, and then...

  3. $$P(B \le A|B=t) = \int \int_{b \le a} f_{(A,B)|B=t}(a,b) da db = \int \int_{b \le a} \frac{f_{A,B,T}(a,b,t)}{f_T(t)} da db$$

$$= \int \int_{b \le a} \frac{f_{A,B,T}(a,b,t)}{\int \int_{(a,b) \in \mathbb R^2} f_{A,B,T}(a,b,t) da db} da db$$

$$= \frac{\int \int_{b \le a} f_{A,B,T}(a,b,t) da db}{\int \int_{(a,b) \in \mathbb R^2} f_{A,B,T}(a,b,t) da db} $$

  4. And then I don't know. Apparently we might not be able to have a joint pdf if two of the random variables are surely equal (maybe even if only almost surely equal):

BCLC

3 Answers


Here is an argument that uses no measure theory, which you might like. A measure-theoretic argument that runs parallel to this one can be made using the conditional expectation $E[1\{B \leq A\}|B]$ and slightly modifying the claim.


Assume the density of $B$ is continuous at the point $t$ and $f_B(t)>0$. Under this assumption, for an event $C$ that satisfies either $P[C]=0$ or that $f_{B|C}(x)$ exists and is continuous at $x=t$, it can be shown that the following limits exist and are equal: $$ \lim_{\delta \rightarrow 0^+} P[C|B \in [t, t+\delta]]=\lim_{\delta \rightarrow 0^+} P[C|B \in [t-\delta, t]]$$ So under these assumptions we can define $$ P[C|B=t] = \lim_{\delta \rightarrow 0^+} P[C|B \in [t, t+\delta]]$$


Now for any $\delta>0$ we have $$ P[B \leq A|B \in [t, t+\delta]] \leq P[t \leq A | B \in [t, t+\delta]] $$

Taking $\delta \searrow 0$ and assuming we converge appropriately gives $$ P[B \leq A | B=t] \leq P[t \leq A |B=t]$$

For the reverse inequality, observe that for any $\delta>0$ we have $$ P[B\leq A | B \in [t-\delta, t]] \geq P[t \leq A | B\in [t-\delta, t]] $$ Taking limits and assuming the limits converge appropriately gives $$ P[B \leq A | B =t] \geq P[t \leq A |B=t]$$
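A numerical sketch of this squeeze, under an assumed concrete joint law (standard bivariate normal $(A,B)$ with correlation $\rho$, so that $A \mid B=b \sim N(\rho b,\, 1-\rho^2)$; the particular $\rho$, $t$, $\delta$ are illustrative assumptions, not part of the argument):

```python
import math
import random

# Assumed example (not from the answer): standard bivariate normal (A, B)
# with correlation rho, so A | B = b ~ N(rho*b, 1 - rho^2).
rho, t, delta = 0.5, 1.0, 0.05
sigma = math.sqrt(1 - rho ** 2)

random.seed(1)
right = {"BA": 0, "tA": 0, "n": 0}  # condition on B in [t, t+delta]
left = {"BA": 0, "tA": 0, "n": 0}   # condition on B in [t-delta, t]
for _ in range(2_000_000):
    b = random.gauss(0, 1)
    if t - delta <= b <= t + delta:
        a = random.gauss(rho * b, sigma)
        side = right if b >= t else left
        side["n"] += 1
        side["BA"] += b <= a   # event {B <= A}
        side["tA"] += t <= a   # event {t <= A}

r_BA, r_tA = right["BA"] / right["n"], right["tA"] / right["n"]
l_BA, l_tA = left["BA"] / left["n"], left["tA"] / left["n"]
# Right window: P[B<=A|.] <= P[t<=A|.]; left window: the reverse.
# As delta -> 0, all four estimates approach the same limit.
print(r_BA, r_tA, l_BA, l_tA)
```

The two inequalities hold sample-by-sample (if $b \ge t$ then $\{b \le a\} \subseteq \{t \le a\}$, and vice versa on the left window), and all four estimates cluster around the common limit.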

Michael

Here is a measure theory argument that runs parallel to my non-measure theory argument.


Let $A, B$ be random variables and fix $t \in \mathbb{R}$. Suppose $B$ has a density $f_B(x)$ that is continuous at the point $x=t$, with $f_B(t)>0$. Let $h:\mathbb{R}\rightarrow \mathbb{R}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$ be bounded and measurable functions that define versions of the following conditional expectations: \begin{align} &h(B) = E[1\{B \leq A\}|B]\\ &g(B) = E[1\{t \leq A\}|B] \end{align}

Claim: If $h(x)$ and $g(x)$ are continuous at $x=t$ then $h(t)=g(t)$.

Proof: Fix $\delta>0$. Observe that $$ 1\{B \leq A\}1\{B \in [t, t+\delta]\} \leq 1\{t \leq A\}1\{B \in [t, t+\delta]\}$$ Thus $$ E[1\{B \leq A\}1\{B \in [t, t+\delta]\}] \leq E [1\{t \leq A\}1\{B \in [t, t+\delta]\}]$$ and so, by the definition of a conditional expectation: $$ E[h(B)1\{B \in [t, t+\delta]\}] \leq E[g(B)1\{B \in [t, t+\delta]\}]$$ Thus $$ \int_{t}^{t+\delta} h(x)f_B(x)dx \leq \int_t^{t+\delta}g(x)f_B(x)dx$$ This holds for all $\delta>0$ and since $h, g$ are both continuous at $x=t$, and $f_B(x)$ is continuous at $x=t$ and $f_B(t)>0$, we get $$ h(t)\leq g(t)$$ A similar argument shows the reverse inequality.
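To see the claim concretely, under an assumed standard bivariate normal $(A,B)$ with correlation $\rho$ (an illustrative example, not part of the proof), the versions $h$ and $g$ have closed forms via the conditional law $A \mid B=b \sim N(\rho b,\, 1-\rho^2)$, and one can check directly that both are continuous and agree at $b=t$:

```python
import math

# Assumed concrete example: standard bivariate normal (A, B) with
# correlation rho, so A | B = b ~ N(rho*b, 1 - rho^2). This gives
# explicit versions of the two conditional expectations in the claim.
rho, t = 0.5, 1.0
sigma = math.sqrt(1 - rho ** 2)

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def h(b):  # version of E[1{B <= A} | B = b] = P(A >= b | B = b)
    return 1 - Phi((b - rho * b) / sigma)

def g(b):  # version of E[1{t <= A} | B = b] = P(A >= t | B = b)
    return 1 - Phi((t - rho * b) / sigma)

# Both are continuous everywhere, and they agree exactly at b = t,
# even though they differ away from b = t.
print(h(t), g(t))
```

Away from $b=t$ the two functions differ, which is the point: only continuity at $t$ (plus $f_B(t)>0$) pins down the common value $h(t)=g(t)$.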

Michael

Here is a third approach that assumes the joint PDF $f_{A,B}(a, b)$ exists.

Define $Z=B-A$. We have the PDF transformation: $$ f_{Z,B}(z,b) = f_{A,B}(b-z,b)$$ Then, assuming $f_B(t)>0$: \begin{align} P[B\leq A | B=t] &= P[Z\leq 0|B=t]\\ &=\int_{-\infty}^0 f_{Z|B}(z|t)dz\\ &=\int_{-\infty}^0 \frac{f_{Z,B}(z,t)}{f_B(t)} dz\\ &=\int_{-\infty}^0 \frac{f_{A,B}(t-z,t)}{f_B(t)}dz\\ &\overset{(a)}{=}\int_t^{\infty} \frac{f_{A,B}(u,t)}{f_B(t)}du\\ &=\int_t^{\infty} f_{A|B}(u|t)du\\ &=P[A\geq t|B=t] \end{align} where (a) uses the change of variable $u=t-z$ and $du=-dz$.
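A numerical sanity check of step (a), under an assumed standard bivariate normal $(A,B)$ with correlation $\rho$ (a concrete example, not from the derivation): evaluate both integrals by Riemann sums, one through the transformed density $f_{Z,B}(z,b)=f_{A,B}(b-z,b)$ and one through $f_{A,B}$ directly, and confirm they match.

```python
import math

# Assumed example: standard bivariate normal (A, B) with correlation rho.
rho, t = 0.5, 1.0

def f_AB(a, b):  # bivariate normal joint density
    q = (a * a - 2 * rho * a * b + b * b) / (1 - rho ** 2)
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(1 - rho ** 2))

def f_B(b):      # standard normal marginal of B
    return math.exp(-b * b / 2) / math.sqrt(2 * math.pi)

def f_ZB(z, b):  # transformed density for Z = B - A
    return f_AB(b - z, b)

dz = 1e-3
# P[Z <= 0 | B = t] via a Riemann sum over the transformed density
p_z = sum(f_ZB(-k * dz, t) * dz for k in range(10_000)) / f_B(t)
# P[A >= t | B = t] via a Riemann sum over the conditional density of A
p_a = sum(f_AB(t + k * dz, t) * dz for k in range(10_000)) / f_B(t)
print(p_z, p_a)  # both approximate P[A >= t | B = t]
```

The two sums agree term by term, which is exactly the change of variable $u = t - z$ made explicit.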

Michael
  • thanks michael. 1 - in general do we really have to do all this joint pdf and $Z=B-A$ and stuff without measure theory? 2 - I mean how would you do this? $$\int_{b=0}^{b=1} P(B < A \le x+B, B < C \le y+B | B=b) f_B(b) db$$ $$ = \int_{b=0}^{b=1} P(b < A \le x+b, b < C \le y+b | B=b) f_B(b) db$$ or simply $$ P(B < A \le x+B, B < C \le y+B | B=b)$$ $$ = P(b < A \le x+b, b < C \le y+b | B=b) $$ 3 - btw the way this is done when $P(\text{denom}) > 0$ is indeed $P(E|K) = P(E \cap K | K)$ right? – BCLC Mar 27 '21 at 04:02
  • so Michael, how would you do this generally? – BCLC Apr 16 '21 at 05:10