Say we have two real-valued random variables $X, Y$ over the probability space $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$, where $\mu$ is uniform on $[0,1]$. Let's use $\phi_x$ and $\phi_y$ to denote the CDFs of $X$ and $Y$ respectively.
By the inverse probability transform, the functions $\phi_x^{-1}$ and $\phi_y^{-1}$ accept samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$ and return samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, X_*\mu)$ and $(\mathbb{R}, \Sigma_{\mathbb{R}}, Y_*\mu)$ respectively.
Therefore, my understanding is that if $\phi_x^{-1}$ and $\phi_y^{-1}$ are measurable, they are themselves random variables. How is the independence/dependence of these random variables related to the independence/dependence of $X$ and $Y$? Are these random variables always independent?
(The question "Are right continuous functions measurable?" suggests that $\phi_x$ and $\phi_y$ are always measurable. I imagine that the measurability of $\phi_x^{-1}$ and $\phi_y^{-1}$ is a separate consideration.)
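For concreteness, here is a minimal sketch of the transform in code (assuming, purely for illustration, that $X$ is standard normal, so that $\phi_x^{-1}$ is `scipy.stats.norm.ppf`):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Draw from (R, Sigma_R, mu): mu is uniform on [0, 1].
u = rng.uniform(0.0, 1.0, size=100_000)

# For a standard normal X, phi_x^{-1} is the quantile function (ppf).
x_samples = stats.norm.ppf(u)

# The transformed samples should be distributed as X_* mu, i.e. N(0, 1).
ks = stats.kstest(x_samples, "norm")
print(f"KS statistic vs. N(0,1): {ks.statistic:.4f}")  # small value => good fit
```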
$P[\phi_x^{-1}(X) \in A, \phi_y^{-1}(Y) \in B] = P[X \in \phi_x(A), Y \in \phi_y(B)]$
– fGDu94 Dec 16 '19 at 22:20

Case 1: $X$ and $Y$ are independent standard normal random variables. Case 2: $X$ is a standard normal random variable and $Y(\omega)=X(\omega)$.
It seems that we have $\phi_x = \phi_y = \Phi_{\mathcal{N}(0,1)}$, but $X$ and $Y$ are independent in the first case and dependent in the second case.
– gigalord Dec 17 '19 at 02:54
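A minimal simulation sketch of these two cases (assumptions for illustration: apply the shared one-to-one map $\Phi_{\mathcal{N}(0,1)}$ via `scipy.stats.norm.cdf`, and use sample correlation as a crude proxy for dependence):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Case 1: X and Y are independent standard normals.
x1 = rng.standard_normal(n)
y1 = rng.standard_normal(n)

# Case 2: X is standard normal and Y(omega) = X(omega).
x2 = rng.standard_normal(n)
y2 = x2

# Apply the common CDF Phi, a one-to-one map, to each variable.
phi = stats.norm.cdf
for label, x, y in [("case 1", x1, y1), ("case 2", x2, y2)]:
    r = np.corrcoef(phi(x), phi(y))[0, 1]
    print(f"{label}: corr(Phi(X), Phi(Y)) = {r:+.3f}")  # ~0 vs. ~1
```

The transformed pair inherits the (in)dependence of $(X, Y)$: near-zero correlation in case 1, perfect correlation in case 2.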
$\phi_x^{-1}$ and $\phi_y^{-1}$ are not themselves random variables; they are functions. $\phi_x^{-1}(X)$ is a random variable, and so is $\phi_y^{-1}(Y)$. This notation means: take whatever value $X$ takes and apply this one-to-one mapping to it. The outcome is a new random variable, $\phi_x^{-1}(X)$, which, as you mentioned, is distributed by $\mu$, i.e., uniform on $[0,1]$.
You should check whether $\phi_x^{-1}(X)$ and $\phi_y^{-1}(Y)$ are independent whenever $X,Y$ are independent.
– fGDu94 Dec 17 '19 at 03:19
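Spelling out this suggested check, under the added assumption that $\phi_x$ and $\phi_y$ are bijections (so that the identity in the first comment applies): if $X$ and $Y$ are independent, then

$$P[\phi_x^{-1}(X) \in A,\ \phi_y^{-1}(Y) \in B] = P[X \in \phi_x(A),\ Y \in \phi_y(B)] = P[X \in \phi_x(A)]\, P[Y \in \phi_y(B)] = P[\phi_x^{-1}(X) \in A]\, P[\phi_y^{-1}(Y) \in B],$$

so independence of $X$ and $Y$ carries over to $\phi_x^{-1}(X)$ and $\phi_y^{-1}(Y)$; running the same chain in reverse gives the converse.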