Basic understanding:
The definition of a r.v. is always in the context of a probability space $(\Omega, \mathscr F, P)$, where $\mathscr F$ is a $\sigma$-algebra of subsets (called "events") of the sample space $\Omega$, and $P:\mathscr F\to[0,1]$ is a probability measure; i.e. $P(A)$ is defined only for events $A\in\mathscr F$.
A random variable is a function which maps from the sample space to the real line, i.e. $X: \Omega \rightarrow \Bbb R$. The r.v. $X$ might map onto a subset of $\Bbb R$, depending on the situation.
There is also a requirement that the function $X: \Omega \rightarrow \Bbb R$ be measurable (i.e., for every Borel subset $B\subseteq\Bbb R$, the set $\{\omega\in\Omega: X(\omega)\in B\}$ must be in the $\sigma$-algebra $\mathscr F$).
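The measurability requirement can be checked concretely on a finite sample space, where a $\sigma$-algebra is just a finite collection of sets. A minimal sketch (all names here are hypothetical, chosen for illustration): for finite $\Omega$ it suffices that every level set $\{\omega: X(\omega)=x\}$ lies in $\mathscr F$, since any Borel preimage is then a finite union of level sets and $\mathscr F$ is closed under unions.

```python
# Hypothetical finite example: Omega = {1, 2, 3, 4} with the coarse
# sigma-algebra generated by the partition {{1, 2}, {3, 4}}.
omega = {1, 2, 3, 4}
sigma_algebra = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]

def is_measurable(X, sigma_algebra, omega):
    # For finite Omega it suffices that each level set
    # {w in Omega : X(w) = x} belongs to the sigma-algebra:
    # every Borel preimage is then a finite union of level sets.
    for x in {X(w) for w in omega}:
        level_set = {w for w in omega if X(w) == x}
        if level_set not in sigma_algebra:
            return False
    return True

X1 = lambda w: 0 if w <= 2 else 1  # constant on partition blocks
X2 = lambda w: w                   # separates points within a block

print(is_measurable(X1, sigma_algebra, omega))  # True: a valid r.v. here
print(is_measurable(X2, sigma_algebra, omega))  # False: {1} is not in F
```

The same function $X_2$ would of course be measurable with respect to the full power set of $\Omega$; measurability depends on the $\sigma$-algebra, not on the function alone.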
The pmf/pdf take as "input" real numbers $x \in Range(X)$ and assign probabilities/densities to them; in other words, the domain of the pdf/pmf is the range of the r.v. $X$.
Yes, except that the domain of definition of the p.d.f/p.m.f is typically extended to all of $\Bbb R$, taking them to be zero outside the range of $X$. (We're using "range" here to mean "image", not codomain.)
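This extension-by-zero is easy to see in a concrete density. A minimal sketch (the function name and default parameters are illustrative) for the Uniform$[a,b]$ p.d.f., defined on all of $\Bbb R$ but zero outside the range of $X$:

```python
def uniform_pdf(x, a=0.0, b=1.0):
    # Defined for every real x; zero outside the image [a, b].
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(0.5))   # 1.0  (inside the support)
print(uniform_pdf(2.0))   # 0.0  (extended by zero)
print(uniform_pdf(-0.1))  # 0.0
```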
Discrete r.v.s
For discrete distributions, the sample space is often meaningful; it is the domain of the r.v. of that distribution. For example, if $X\sim \operatorname{Binomial}(n, p)$, the sample space of outcomes is $\Omega = \operatorname{Domain}(X) = \{(TTT...TT), (TTT...TH), \dots\}$, the set of all $n$-tuples of heads and tails. The r.v. $X$ is then a "counting" function: it maps each outcome in this sample space (an $n$-tuple) to the number of heads (a real number). Here, $X$ has a purpose.
The distribution of a r.v. does not determine its domain. For example, the Binomial distribution can arise as you've described; however, mathematically, we could certainly have $\Omega=\{0,1,...,n\}$, with $X(\omega)=\omega,$ and the exact same p.m.f.
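The two constructions can be compared directly. A minimal sketch, assuming small $n$ so that enumerating all $2^n$ tuples is feasible: model 1 builds the p.m.f. by summing outcome probabilities over the tuple sample space, while model 2 takes $\Omega=\{0,1,...,n\}$ with $X$ the identity and assigns the binomial p.m.f. directly; the two agree.

```python
from itertools import product
from math import comb

n, p = 3, 0.4

# Model 1: Omega = all n-tuples of tosses, X counts heads.
pmf1 = {k: 0.0 for k in range(n + 1)}
for outcome in product("HT", repeat=n):
    heads = outcome.count("H")
    pmf1[heads] += p**heads * (1 - p)**(n - heads)

# Model 2: Omega = {0, ..., n}, X(w) = w, p.m.f. assigned directly.
pmf2 = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

for k in range(n + 1):
    print(k, round(pmf1[k], 6), round(pmf2[k], 6))  # identical columns
```

Both models have the same distribution, even though the domains of the two random variables are entirely different.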
Continuous r.v.s
For most continuous distributions I have come across, the random variable itself is just the identity mapping.
Again, the distribution of a r.v. does not determine its domain, but if the domain is not specified then we can always take the probability space to be $(\Omega,\mathscr F, P)=(\Bbb R, \mathscr B({\Bbb R}),P)$, where $P()$ is defined by the p.d.f./p.m.f. (or more generally the c.d.f.), with $X$ the identity function.
For the continuous uniform, the "sample space" is $\Bbb R$. Note that $f(x)$ here has support only on $[a, b]$, but is defined on all of $\Bbb R$, and thus the sample space is $\Bbb R$. Thus, $X: \Bbb R \rightarrow \Bbb R$ is once again an identity mapping.
Consider the Uniform distribution on $[0,1]$. Here are two quite different models:
(identity map) $(\Omega,\mathscr F,P)=(\Bbb R,\mathscr B({\Bbb R}),P)$, where $X$ is the identity function on $\Bbb R$, with p.d.f. $f_X(x)=1_{x\in[0,1]}$ for all $x\in\Bbb R$. For any Borel subset of $\Bbb R$, we have $P(X\in B):=\int_{B}f_X(x)\,dx.$
(coin tosses as random digits) $(\Omega,\mathscr F,P)=(\{0,1\}^\infty,\mathscr B(\{0,1\}^\infty),P)$, where $P$ is the product measure on the Borel subsets of $\{0,1\}^\infty$, whose marginals are Uniform on $\{0,1\}$, and $X(\omega_1\omega_2...):=(0.\omega_1\omega_2...)_2.$
This models a "thought experiment" whose outcome $\omega=(\omega_1,\omega_2,...)$ is the binary sequence resulting from tossing a fair "0-or-1" coin infinitely many times, and $X(\omega)$ is just that sequence read as a base-2 numeral (prefixed by "$0.$"). It can be shown that the distribution of $X$ is exactly Uniform on $[0,1]$.
More generally, we have the inverse probability integral transform theorem:
If $X$ is a r.v. whose c.d.f. is $F$, then the r.v. $F^{-1}(U)$ has the same distribution as $X$, where $F^{-1}$ is the generalized inverse of $F$ and $U$ is a r.v. with Uniform distribution on $[0,1]$.
Thus, if a r.v. with c.d.f. $F$ has unspecified domain, then the domain could be $[0,1]$, because the distribution can't be distinguished from that of r.v. $F^{-1}:\Omega\to\Bbb R$, where $(\Omega,\mathscr F,P)=([0,1],\mathscr B([0,1]),P)$ with Uniform distribution $P$.
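The inverse probability integral transform is also how samplers are built in practice. A minimal sketch, assuming the Exponential distribution with rate $1$ as the target: its c.d.f. is $F(x)=1-e^{-x}$, so $F^{-1}(u)=-\log(1-u)$, and feeding Uniform$[0,1]$ draws through $F^{-1}$ should produce Exponential samples (theoretical mean $1$).

```python
import math
import random

random.seed(1)

# Target: Exponential(rate 1), with F(x) = 1 - exp(-x),
# so the generalized inverse is F^{-1}(u) = -log(1 - u).
def F_inv(u):
    return -math.log(1.0 - u)

# Push Uniform[0,1] draws through F^{-1}.
samples = [F_inv(random.random()) for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(mean)  # should be close to the theoretical mean, 1
```

Here the probability space really is $([0,1],\mathscr B([0,1]),P)$ with $P$ Uniform, and the random variable is $F^{-1}$ itself, exactly as in the statement above.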