$\def\ed{\stackrel{\text{def}}{=}}$
In one sense, the answer to your question is trivially "yes", because the very definition of the term "random variable" makes it a measurable function from the sample space of a probability space into some other measurable space. Of course, this doesn't really answer your question, because you're using the term "random variable" in a somewhat looser sense, as being somehow specified by a probability density function or probability mass function.
To rephrase your question in more precise language, therefore, I take it to be "Given a probability density function or probability mass function, does there always exist a random variable on the sample space of some probability space whose density or mass function is the given one?" The answer to that question is "yes". In fact you can always take the probability space to be $\ \big((0,1), \mathscr{B}_{(0,1)},\ell\big)\ ,$ where $\ \mathscr{B}_{(0,1)}\ $ is the collection of Borel sets of the open unit interval, and $\ \ell\ $ is Lebesgue measure—i.e. the uniform distribution on the unit interval. Alternatively, it's always possible, and usually more convenient, to choose the sample space to be the set $\ \mathbb{R}\ $ of real numbers and the random variable to be the identity function on $\ \mathbb{R}\ .$ You can always use the given density function or mass function to define an appropriate probability measure on the Borel subsets of $\ \mathbb{R}\ $ so that this will work.
Suppose you're given a density function $\ \varphi:\mathbb{R}\rightarrow\mathbb{R}\ $ (that is, a nonnegative measurable function with $\ \int_{\mathbb{R}}\varphi(x)\,dx=1\ $). If $\ X\ $ is the identity function on $\ \mathbb{R}\ $, and you define a probability measure $\ P\ $ on the Borel subsets $\ \mathscr{B}_{\mathbb{R}}\ $ of $\ \mathbb{R}\ $ by
$$
P(A)\ed\int_A\varphi(x)\,dx
$$
for $\ A\in\mathscr{B}_{\mathbb{R}}\ ,$ then
\begin{align}
P\big(X^{-1}(A)\big)&=P(A)\\
&=\int_A\varphi(x)\,dx\ ,
\end{align}
so $\ \varphi\ $ is the density function of $\ X\ .$
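Here's a quick numerical sketch of this construction in Python, taking the standard normal density as an illustrative choice of $\ \varphi\ $ (the name `phi` and the interval-based `P` are mine, not anything canonical):

```python
# Sketch of the construction above: P(A) = integral of phi over A,
# evaluated here for intervals A = (a, b] by numerical quadrature.
# phi is an illustrative example (the standard normal density).
import math
from scipy.integrate import quad

def phi(x):
    # Nonnegative and integrates to 1 over R, as a density must.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def P(a, b):
    # P((a, b]); quad returns (value, estimated_error).
    value, _ = quad(phi, a, b)
    return value

# Since X is the identity on R, P(X in (a, b]) is just P((a, b]).
print(P(-1.96, 1.96))  # roughly 0.95 for the standard normal
```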
Now suppose you're given a probability mass function $\ \mu:\mathscr{D}_\mu\rightarrow[0,1]\ ,$ where $\ \mathscr{D}_\mu\ $ is some countable subset of $\ \mathbb{R}\ $ and $\ \sum_{d\in\mathscr{D}_\mu}\mu(d)=1\ .$ If $\ X\ $ is again the identity function on $\ \mathbb{R}\ $, and you define a probability measure $\ P\ $ on the Borel subsets $\ \mathscr{B}_{\mathbb{R}}\ $ of $\ \mathbb{R}\ $ by
$$
P(A)\ed\cases{0&if $\ \mathscr{D}_\mu\cap A=\varnothing$\\
\displaystyle\sum_{x\in \mathscr{D}_\mu\cap A}\mu(x)&otherwise}
$$
for $\ A\in\mathscr{B}_{\mathbb{R}}\ ,$ then for $\ \mathscr{D}_\mu\cap A\ne\varnothing\ $ you have
\begin{align}
P\big(X^{-1}(A)\big)&=P(A)\\
&=\sum_{x\in \mathscr{D}_\mu\cap A}\mu(x)\ .
\end{align}
In particular, taking $\ A=\{x\}\ $ for $\ x\in\mathscr{D}_\mu\ $ gives $\ P(X=x)=\mu(x)\ ,$ so $\ \mu\ $ is the probability mass function of $\ X\ .$
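The discrete construction is just as easy to sketch; here $\ \mu\ $ is an arbitrary illustrative pmf on $\ \mathscr{D}_\mu=\{0,1,2\}\ $:

```python
# Sketch of the discrete construction: P(A) sums mu over D_mu ∩ A,
# and is 0 when that intersection is empty, matching the case split above.
mu = {0: 0.2, 1: 0.5, 2: 0.3}  # illustrative pmf; values sum to 1

def P(A):
    # A can be any set of reals; only support points lying in A contribute.
    return sum(p for x, p in mu.items() if x in A)

print(P({1, 2}))      # 0.8
print(P({-5, 7.5}))   # 0.0, since D_mu ∩ A is empty
```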
To show that you can always take the probability space to be $\ \big((0,1), \mathscr{B}_{(0,1)},\ell\big)\ ,$ it's easier to work with the cumulative distribution function. The cumulative distribution function $\ F_\varphi\ $ corresponding to a density function $\ \varphi\ $ is given by
$$
F_\varphi(x)\ed\int_{-\infty}^x\varphi(y)\,dy\ ,\tag{1}\label{e1}
$$
and the cumulative distribution function $\ F_\mu\ $ corresponding to a probability mass function $\ \mu:\mathscr{D}_\mu\rightarrow[0,1]\ $ is given by
$$
F_\mu(x)\ed\sum_{d\in \mathscr{D}_\mu: d\le x}\mu(d)\ .\tag{2}\label{e2}
$$
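As a sketch of how \eqref{e1} and \eqref{e2} play out numerically (reusing the illustrative `phi` and `mu` from the earlier snippets):

```python
# Equation (1): CDF from a density by integration; equation (2): CDF
# from a pmf by partial summation.  phi and mu are as in the sketches above.
import math
from scipy.integrate import quad

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

mu = {0: 0.2, 1: 0.5, 2: 0.3}

def F_phi(x):
    value, _ = quad(phi, -math.inf, x)  # integrate phi over (-inf, x]
    return value

def F_mu(x):
    return sum(p for d, p in mu.items() if d <= x)  # sum over support d <= x

print(F_phi(0.0))  # 0.5, by symmetry of the normal density
print(F_mu(1.5))   # 0.7
```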
In general, the cumulative distribution function $\ F_Y\ $ of a random variable $\ Y:\Omega\rightarrow\mathbb{R}\ $ on a probability space $\ (\Omega,\mathcal{F},P)\ $ is defined by
$$
F_Y(x)\ed P\big(Y^{-1}((-\infty,x])\big)\ ,
$$
and is a more general concept than the probability density function or probability mass function, in that there exist cumulative distribution functions that cannot be obtained from any density function or probability mass function via equation \eqref{e1} or \eqref{e2}.
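The Cantor function is the standard example: it's a continuous CDF that increases only on the Cantor set, so it has no jumps (hence no pmf) and its derivative vanishes almost everywhere (hence no density). Here's a floating-point sketch of it, computed from the ternary expansion:

```python
def cantor_cdf(x, depth=40):
    # Approximate the Cantor function on [0, 1] from the ternary digits
    # of x: digits equal to 2 become binary 1s, and the expansion stops
    # at the first digit equal to 1.  Floating-point sketch only.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            # x lies under a removed middle third: the CDF plateaus here.
            return result + scale
        result += scale * (digit // 2)
        scale /= 2
    return result

print(cantor_cdf(0.25))  # 1/3, a classical value of the Cantor function
```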
It follows from the properties of probability spaces that the cumulative distribution function of any random variable is a nondecreasing, right-continuous function $\ F:\mathbb{R}\rightarrow[0,1]\ $ with $\ \lim\limits_{x\rightarrow-\infty}F(x)=0\ $ and $\ \lim\limits_{x\rightarrow\infty}F(x)=1\ .$ Conversely, an answer to your question is provided by the following:
Theorem. If $\ F:\mathbb{R}\rightarrow[0,1]\ $ is a nondecreasing, right-continuous function with $\ \lim\limits_{x\rightarrow-\infty}F(x)=0\ $ and $\ \lim\limits_{x\rightarrow\infty}F(x)=1\ ,$ then $\ F\ $ is the cumulative distribution function of a random variable defined on the probability space $\ \big((0,1), \mathscr{B}_{(0,1)},\ell\big)\ .$
Proof (outline). Let
$$
X(\alpha)\ed\inf\{x\in\mathbb{R}\,|\,\alpha\le F(x)\,\}
$$
for $\ \alpha\in(0,1)\ .$ It's not difficult to show that
\begin{align}
\big(0,F(x)\big)&\subseteq X^{-1}\big((-\infty,x\,]\big)\\
&\ed\{\alpha\in(0,1)\,|\,X(\alpha)\le x\,\}\\
&\subseteq\big(0,F(x)\big]
\end{align}
(for the first inclusion, $\ \alpha<F(x)\ $ implies $\ X(\alpha)\le x\ $ directly from the definition of $\ X\ $; for the second, $\ X(\alpha)\le x\ $ implies $\ \alpha\le F\big(X(\alpha)\big)\le F(x)\ ,$ using the right-continuity and monotonicity of $\ F\ $), and therefore that
$$
\ell\big(X^{-1}\big((-\infty,x\,]\big)\big)=F(x)\ .
$$
Thus, $\ X\ $ is a random variable defined on the probability space $\ \big((0,1), \mathscr{B}_{(0,1)},\ell\big)\ $ whose cumulative distribution function is $\ F\ .$
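This construction is exactly the generalized inverse (quantile function) that underlies inverse-transform sampling. Here's a sketch that computes $\ X(\alpha)\ $ by bisection for an illustrative continuous $\ F\ $ (the standard logistic CDF) and checks empirically that uniform samples pushed through $\ X\ $ have CDF $\ F\ $:

```python
# X(alpha) = inf{x : alpha <= F(x)}, computed by bisection.  F is an
# illustrative choice (standard logistic); for a strictly increasing
# continuous F this is just the ordinary inverse F^{-1}.
import math
import random

def F(x):
    return 1.0 / (1.0 + math.exp(-x))  # nondecreasing, right-continuous

def X(alpha, lo=-50.0, hi=50.0, tol=1e-10):
    # Assumes F(lo) < alpha <= F(hi); maintains alpha <= F(hi) throughout.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if alpha <= F(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Empirical check: the fraction of samples <= x should approximate F(x).
samples = [X(random.random()) for _ in range(100_000)]
x = 1.0
print(sum(s <= x for s in samples) / len(samples), "vs", F(x))
```

If $\ F\ $ has jumps or flat stretches, the same bisection still converges to $\ \inf\{x\mid\alpha\le F(x)\}\ ,$ which is why the theorem works with the infimum rather than a literal inverse.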