Given a set of real-valued points $C:=\{x_i\colon\ i\in I\}$, where $I$ is a countable index set, I want to construct a probability space $(\Omega, \mathcal{A}, P)$ and a random variable $X\colon \Omega\rightarrow C$ such that $$P[X=x_i]=p_i,$$ where $p_i\ge 0$ for all $i\in I$ and $\sum_{i\in I} p_i=1$.
Here are my solutions:
1:
Set $(\Omega, \mathcal{A}, P)=(C,2^C, \mu)$, where $\mu(A):=\sum_{x_i\in A}p_i$ for $A\in 2^C$; this is a probability measure since the $p_i$ sum to $1$. Then just pick $X=\mathrm{id}$, and since the identity is always measurable, we are done.
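As a quick numerical sanity check of construction 1 (the points $x_i$ and weights $p_i$ below are hypothetical, chosen only for illustration):

```python
# Construction 1 on a finite example: Omega = C, A = 2^C,
# mu(A) = sum of the weights p_i of the points x_i in A, X = identity.
C = [0.5, 1.7, 3.0]   # hypothetical points x_i
p = [0.2, 0.5, 0.3]   # hypothetical weights p_i, nonnegative, summing to 1

def mu(A):
    """Measure of a subset A of C: sum the weights of its points."""
    return sum(pi for xi, pi in zip(C, p) if xi in A)

def X(omega):
    """X is the identity map on Omega = C."""
    return omega

# P[X = x_i] = mu({x_i}) = p_i for each i, and mu(C) = 1.
for xi, pi in zip(C, p):
    assert abs(mu({xi}) - pi) < 1e-12
assert abs(mu(set(C)) - 1.0) < 1e-12
```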
2:
Identifying $I$ with a subset of $\mathbb{N}$, set $(\Omega,\mathcal{A},P)= (\mathbb{N}, \mathcal{B}(\mathbb{N}), \mu)$ with $\mu(A):=\sum_{i\in I} p_i\,\delta_i(A)$ for $A\in\mathcal{B}(\mathbb{N})$, where $\delta_i$ denotes the Dirac measure at $i$. I then define $X(i)=x_i$.
Are these examples correct? Is there a more systematic way to solve this problem (also for the continuous case)?
In my book I saw a theorem that for every distribution function $F$ there exist a probability space and a random variable $X$ on it such that $F_X=F$; that is, there is always a random variable with the given distribution function. The correspondence theorem states that distribution functions and probability measures on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ are in one-to-one correspondence. Do these two theorems combined give me the assurance that for every probability distribution $P$ on $\mathbb{R}$ I can find a random variable $X$ such that $P=P_X$?
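The existence theorem cited above is commonly proved by the canonical construction $\Omega=(0,1)$ with Lebesgue measure and $X(\omega)=F^{-1}(\omega):=\inf\{x\colon F(x)\ge\omega\}$, the generalized inverse (quantile function) of $F$. A sketch of this for the discrete case, using the same hypothetical points and weights as before:

```python
import random

# Canonical construction sketch: Omega = (0,1) with Lebesgue measure,
# X(omega) = F^{-1}(omega), the generalized inverse of the discrete cdf F.
# The points and weights below are hypothetical.
C = [0.5, 1.7, 3.0]
p = [0.2, 0.5, 0.3]

def quantile(u):
    """Generalized inverse F^{-1}(u) = inf{x : F(x) >= u}."""
    cum = 0.0
    for xi, pi in zip(C, p):
        cum += pi
        if u <= cum:
            return xi
    return C[-1]

# Empirically, X pushes the uniform measure on (0,1) forward to the
# target law: the sample frequencies approximate the weights p_i.
random.seed(0)
n = 100_000
samples = [quantile(random.random()) for _ in range(n)]
freqs = [samples.count(xi) / n for xi in C]
```

The same formula handles the continuous case: if $F$ is continuous and strictly increasing, $F^{-1}$ is the ordinary inverse, and $X=F^{-1}$ on $((0,1),\mathcal{B}((0,1)),\lambda)$ has distribution function $F$.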