
Say we have two real-valued random variables $X,Y$ over the probability space $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$ where $\mu$ is uniform on $[0,1]$. Let's use $\phi_x$ and $\phi_y$ to denote the CDFs of $X$ and $Y$ respectively.

By the inverse probability transform, the functions $\phi_x^{-1}$ and $\phi_y^{-1}$ accept samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$ and return samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, X_*\mu)$ and $(\mathbb{R}, \Sigma_{\mathbb{R}}, Y_*\mu)$ respectively.
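For concreteness, here is a minimal sketch of how I picture the transform numerically (my own illustration; the standard normal, via `scipy.stats.norm.ppf`, is just one example choice for $\phi_x^{-1}$):

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)

# Samples from the base space: [0, 1] with the uniform measure.
u = rng.uniform(0.0, 1.0, size=100_000)

# Apply the inverse CDF (scipy calls it the percent-point function, ppf):
# the pushforward of the uniform samples is the standard normal law.
x = norm.ppf(u)

# Sanity check that the transformed samples look standard normal.
print(kstest(x, "norm"))  # small statistic, large p-value expected
```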

Therefore, my understanding is that in the case that $\phi_x^{-1}$ and $\phi_y^{-1}$ are measurable, they themselves are random variables. How is the independence/dependence of these random variables related to the independence/dependence of $X$ and $Y$? Are these random variables always independent?

(The question "Are right continuous functions measurable?" suggests that $\phi_x$ and $\phi_y$ are always measurable. I imagine that the measurability of $\phi_x^{-1}$ and $\phi_y^{-1}$ is a separate consideration.)

gigalord
  • they are independent iff $X,Y$ are independent. To see this, use:

    $P[\phi_x^{-1}(X) \in A, \phi_y^{-1}(Y) \in B] = P[X \in \phi_x(A), Y \in \phi_y(B)]$

    – fGDu94 Dec 16 '19 at 22:20
  • @GeorgeDewhirst I am not sure I understand this. I am thinking of the following cases:

    Case 1: $X$ and $Y$ are independent standard normal random variables. Case 2: $X$ is a standard normal random variable and $Y(\omega)=X(\omega)$.

    It seems that we have $\phi_x = \phi_y = \Phi_{\mathcal{N}(0,1)}$ in both cases, but $X$ and $Y$ are independent in the first case and dependent in the second.

    – gigalord Dec 17 '19 at 02:54
  • there is some misunderstanding here which I will clarify.

    $\phi_x^{-1}$ and $\phi_y^{-1}$ are not themselves random variables, they are functions. $\phi_x^{-1}(X)$ is a random variable, and so is $\phi_y^{-1}(Y)$. This notation means: take whatever value $X$ is and apply this one-to-one mapping to it. The outcome is a new random variable $\phi_x^{-1}(X)$, which, as you mentioned, is distributed by $\mu$, i.e. uniform on $[0,1]$.

    You should check whether $\phi_x^{-1}(X)$ and $\phi_y^{-1}(Y)$ are independent whenever $X,Y$ are independent.

    – fGDu94 Dec 17 '19 at 03:19
  • @GeorgeDewhirst thank you for getting back to me. I agree that $\phi_x^{-1}(\omega)$ and $\phi_y^{-1}(\omega)$ are measurable functions, just like $X(\omega)$ and $Y(\omega)$ (my understanding is that a random variable is just a measurable function). $\phi_x^{-1}(X(\omega))$ and $\phi_y^{-1}(Y(\omega))$ are also random variables, but it seems they are distinct from $\phi_x^{-1}(\omega)$ and $\phi_y^{-1}(\omega)$. Even if $\phi_x^{-1}(X(\omega))$ and $\phi_y^{-1}(Y(\omega))$ are independent, I do not understand how this implies $\phi_x^{-1}(\omega)$ and $\phi_y^{-1}(\omega)$ are independent – gigalord Dec 17 '19 at 13:15
  • $\phi_x^{-1}(\omega)$ and $\phi_y^{-1}(\omega)$ aren't random variables. $\phi_{x}:\mathbb{R} \to \mathbb{R}$, $\phi_{y}:\mathbb{R} \to \mathbb{R}$. – fGDu94 Dec 17 '19 at 23:48
  • @GeorgeDewhirst I think this is the source of my confusion. My understanding is that the only condition for a real-valued function on a probability space to be a random variable is for it to be measurable. In the case that $\phi^{-1}_x$ and $\phi^{-1}_y$ are measurable, why would they not be random variables? – gigalord Dec 18 '19 at 16:20

1 Answer


They are not independent. In fact, they are completely coupled through their percentiles (they are comonotone). I'm going to change our notation a little bit to try to clarify this point.

Let our underlying probability space be $([0,1],\mathcal{B}[0,1],\lambda)$, the unit interval with the Borel $\sigma$-field and the Lebesgue measure.

Now, in your question, $X$ and $Y$ really do nothing more than provide probability laws. (You introduced them as random variables, but immediately took their CDFs, so it may be best to think of $X$ and $Y$ as given distributions over $\mathbb{R}$ rather than as functions from $[0,1]$ to $\mathbb{R}$.) So we are given two CDFs, $F_X, F_Y$.

We note that $F_X^{-1} : [0,1] \rightarrow \mathbb{R}$ is a random variable with the desired distribution, and similarly for $F_Y^{-1}$. Furthermore, when we pick $\omega \in [0,1]$, $F_X^{-1}(\omega)$ evaluates to the $\omega$-quantile (the $100\omega$-th percentile); e.g. if $\omega = 0.5$, then $F_X^{-1}(\omega)$ and $F_Y^{-1}(\omega)$ are the medians of their respective distributions.
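As a quick numerical check of the median claim (a sketch; the normal and exponential laws are just example choices, not anything from the question):

```python
from scipy.stats import norm, expon

# F^{-1}(0.5) is the median of each distribution.
print(norm.ppf(0.5))    # 0.0, the median of N(0, 1)
print(expon.ppf(0.5))   # ~0.693 = ln(2), the median of Exp(1)
print(expon.median())   # agrees with ppf(0.5)
```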

For even more concreteness, consider the trivial case where $X$ and $Y$ are both uniform on $[0,1]$. In this case, $F_X^{-1}(\omega) = F_Y^{-1}(\omega) = \omega$, so the two random variables are exactly equal to each other. Definitely not independent!
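To see this numerically, here is a small simulation sketch (again with example distributions of my own choosing, a normal and an exponential; the key point is that both quantile functions are fed the same draw $\omega$):

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)

# One batch of draws omega from the underlying space ([0,1], Lebesgue measure).
omega = rng.uniform(0.0, 1.0, size=100_000)

# Both random variables are built from the SAME omega via their quantile functions.
x = norm.ppf(omega)    # plays the role of F_X^{-1}(omega)
y = expon.ppf(omega)   # plays the role of F_Y^{-1}(omega)

# Each is an increasing function of omega, so they are comonotone:
# knowing one determines the other exactly -- far from independent.
print(np.corrcoef(x, y)[0, 1])                       # strongly positive
print(np.array_equal(np.argsort(x), np.argsort(y)))  # identical orderings: True
```

Because each output is an increasing function of the same $\omega$, the two samples always come out in the same order; that is the percentile coupling described above.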

Roy D.
  • Thanks for your answer. It seems from this that at the very least, $F_X^{-1}$ and $F_Y^{-1}$ will be dependent whenever $X$ and $Y$ have the same distribution (as you say, since we begin by immediately taking the CDFs of $X$ and $Y$ the independence/dependence between them is irrelevant). Do you know if there is a more general rule for this? – gigalord Jan 12 '20 at 14:45
  • The general rule is as stated above. When $\omega \in [0,1]$ is chosen, both $F_X^{-1}$ and $F_Y^{-1}$ will evaluate to the $\omega$ percentile. For example, when $\omega = 0.5$, $F_X^{-1}(\omega)$ gives the median of $X$ and similarly $F_Y^{-1}(\omega)$. – Roy D. Jan 13 '20 at 02:11
  • I guess I am wondering about the phenomenon that i.i.d. random variables $X,Y$ can have different functional forms as maps from $[0,1]$ to $\mathbb{R}$, yet, being identically distributed, they share the same pushforward measure $X_*\mu = Y_*\mu$ over $\mathbb{R}$, the same cumulative distribution function $F_X = F_Y$, and hence the same inverse cumulative distribution function $F_X^{-1} = F_Y^{-1}$. I suppose this is because passing from a random variable to its CDF is not one-to-one here? – gigalord Jan 13 '20 at 19:39