Consider the probability space $(\Omega_1,2^{\Omega_1},\mathbb{P}_1)$, where $\Omega_1$ is a sample space, $2^{\Omega_1}$ (the power set of $\Omega_1$) is the $\sigma$-algebra over $\Omega_1$, and $\mathbb{P}_1$ is a probability measure on it. Likewise, consider the probability space $(\Omega_2,2^{\Omega_2},\mathbb{P}_2)$.
In particular, suppose the first probability space corresponds to the probabilistic experiment of choosing a plaintext at random from the set $\Omega_1 = \{a,b\}$, where $\mathbb{P}_1(a) = 1/4$ and $\mathbb{P}_1(b) = 3/4$. Similarly, suppose that the second probability space corresponds to the probabilistic experiment of choosing a key at random from the set $\Omega_2 = \{K_1,K_2,K_3\}$, where $\mathbb{P}_2(K_1) = 1/2$ and $\mathbb{P}_2(K_2) = \mathbb{P}_2(K_3) = 1/4$.
Now, I'd like to "combine" these two random experiments into a single random experiment whose sample space is the Cartesian product of the two component sample spaces. That is, the combined probability space is $(\Omega_1 \times \Omega_2, 2^{\Omega_1 \times \Omega_2}, \mathbb{P})$, where $\mathbb{P}$ reflects the fact that the two experiments are "independent": $\mathbb{P}(\{(p,k)\}) = \mathbb{P}_1(p)\,\mathbb{P}_2(k)$ for every $p \in \Omega_1$ and $k \in \Omega_2$. Is there a formal way to do this?
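For concreteness, here is what that product measure would look like on the six elementary outcomes in my example (just spelling out the arithmetic of the formula above):
$$\begin{aligned}
\mathbb{P}(\{(a,K_1)\}) &= \tfrac14\cdot\tfrac12 = \tfrac18, &\qquad
\mathbb{P}(\{(a,K_2)\}) &= \mathbb{P}(\{(a,K_3)\}) = \tfrac14\cdot\tfrac14 = \tfrac1{16},\\
\mathbb{P}(\{(b,K_1)\}) &= \tfrac34\cdot\tfrac12 = \tfrac38, &\qquad
\mathbb{P}(\{(b,K_2)\}) &= \mathbb{P}(\{(b,K_3)\}) = \tfrac34\cdot\tfrac14 = \tfrac3{16},
\end{aligned}$$
and indeed $\tfrac18 + 2\cdot\tfrac1{16} + \tfrac38 + 2\cdot\tfrac3{16} = 1$, so this does define a probability measure on $\Omega_1 \times \Omega_2$.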
The problem, as I see it, is that from what I've learned so far about measure-theoretic probability, the concept of independence applies only to events within the same $\sigma$-algebra. Within this formal framework, it therefore doesn't seem to make sense to talk about the "independence" of the event of picking some plaintext and the event of picking some key, since plaintexts and keys belong to completely different sample spaces (and completely different probability spaces) in this context.
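To be explicit about the definition I have in mind: for two events $A, B$ in a single probability space $(\Omega, \mathcal{F}, \mathbb{P})$, independence means
$$\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B),$$
which presupposes that $A \cap B$ is itself an event of that one space. But the event "the plaintext is $a$" is a subset of $\Omega_1$, while "the key is $K_1$" is a subset of $\Omega_2$, so their intersection isn't even a meaningful object until the two spaces have somehow been combined.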