Well, a very simple (even trivial) example would be the identity, i.e.
$$(\Omega,~\mathcal{F},~\mathbb{P}) = \left((a,b),~\mathcal{B}_{\Omega},~\mathcal{U}_{\Omega}\right),~~ X = \mathrm{id}_{\Omega}.$$
However, the whole point of working with the distribution of a random variable $X$ is that we don't have to care about the underlying space $(\Omega,~\mathcal{F},~\mathbb{P})$ or the mapping $X$ itself. Say we take $(a,b) = (-1,1)$; then another example is given by
$$\Omega = (-2,-1)\cup[0,1),~\mathcal{F} = \mathcal{B}_{\Omega},~\mathbb{P} = \mathcal{U}_{\Omega}$$ with
$$X:\Omega\rightarrow(-1,1),~\omega\mapsto\left\{\begin{array}{cl}\omega+1,&\omega<0,\\\omega,&\omega\geq 0.\end{array}\right.$$
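Indeed, a quick computation confirms uniformity: $\Omega$ has total length $2$, and for every $x\in(-1,1)$,
$$\mathbb{P}(X\leq x) = \frac{1}{2}\,\lambda\big(\{\omega\in\Omega : X(\omega)\leq x\}\big) = \frac{x+1}{2},$$
where $\lambda$ denotes Lebesgue measure; this is exactly the CDF of $\mathcal{U}_{(-1,1)}$.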
Of course, this changes nothing about the behaviour of $X$ as a uniformly distributed r.v., and we could come up with endless (less simple) examples.
I suppose a related question that's more interesting is how to obtain uniformly distributed (or otherwise distributed) r.v.s by transforming or combining other r.v.s:
For instance, there is a classical result saying that for statistical tests (under suitable conditions, in particular a continuous test statistic) the $p$-value is uniformly distributed on $(0,1)$ under the null hypothesis.
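If you want to see this result in action, here is a minimal simulation sketch in Python (the choice of test, seed and sample sizes are my arbitrary choices for illustration): many one-sample $t$-tests on data generated under the null should yield $p$-values that look uniform on $(0,1)$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)   # seed chosen arbitrarily for reproducibility
n_tests, n_obs = 10_000, 30      # illustrative sizes, my choice

# One-sample t-tests on N(0,1) data, i.e. data generated under H0: mu = 0.
pvals = np.array([
    stats.ttest_1samp(rng.standard_normal(n_obs), popmean=0.0).pvalue
    for _ in range(n_tests)
])

# Under H0 (continuous test statistic) the p-values should be ~ U(0,1);
# a Kolmogorov-Smirnov test against U(0,1) should not reject.
print(stats.kstest(pvals, "uniform"))
```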
Or if $X,Y$ are iid geometrically distributed (supported on $\{0,1,2,\ldots\}$) and you condition on $\{X + Y = z\}$, then $X$ and $Y$ are each uniformly distributed on $\{0,1,\ldots,z\}$.
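The computation behind this is short: writing $\mathbb{P}(X=k)=p(1-p)^k$ for $k\in\{0,1,2,\ldots\}$, we get for $k\in\{0,\ldots,z\}$
$$\mathbb{P}(X=k\mid X+Y=z) = \frac{\mathbb{P}(X=k)\,\mathbb{P}(Y=z-k)}{\mathbb{P}(X+Y=z)} = \frac{p^2(1-p)^z}{(z+1)\,p^2(1-p)^z} = \frac{1}{z+1},$$
since the numerator doesn't depend on $k$.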
Or also the Poisson distribution arising as a limit (in distribution) of binomial distributions...
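Concretely, for fixed $\lambda>0$ and $p_n = \lambda/n$,
$$\binom{n}{k}\,p_n^k\,(1-p_n)^{n-k}\longrightarrow e^{-\lambda}\frac{\lambda^k}{k!}\qquad(n\to\infty)$$
for every fixed $k\in\{0,1,2,\ldots\}$.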