Suppose $X, Y$ are bounded random variables such that $\mathbb{E}[X^mY^n] = \mathbb{E}[X^m]\mathbb{E}[Y^n]$ for all positive integers $m, n$. Show that $X$ and $Y$ are independent.
I have some idea of how this is supposed to go:
By linearity of expectation, $\mathbb{E}[f(X)g(Y)] = \mathbb{E}[f(X)]\mathbb{E}[g(Y)]$ for all polynomials $f, g$.
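Spelled out, with $f(x) = \sum_{m=0}^{M} a_m x^m$ and $g(y) = \sum_{n=0}^{N} b_n y^n$,
$$\mathbb{E}[f(X)g(Y)] = \sum_{m=0}^{M}\sum_{n=0}^{N} a_m b_n \,\mathbb{E}[X^m Y^n] = \sum_{m=0}^{M}\sum_{n=0}^{N} a_m b_n \,\mathbb{E}[X^m]\,\mathbb{E}[Y^n] = \mathbb{E}[f(X)]\,\mathbb{E}[g(Y)],$$
where the terms with $m = 0$ or $n = 0$ factor trivially even though the hypothesis is stated only for positive $m, n$.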
After this we use limit theorems to extend this to all bounded measurable functions $f, g$, in particular to indicator functions, so that for all measurable sets $A, B$ we have $\mathbb{E}[1_{X \in A}\cdot 1_{Y \in B}] = \mathbb{E}[1_{X \in A}]\,\mathbb{E}[1_{Y\in B}]$, which reduces to $P(X \in A, Y \in B) = P(X \in A)P(Y \in B)$, i.e. independence.
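For the last reduction, note that for measurable sets $A, B$,
$$\mathbb{E}[1_{X \in A}\cdot 1_{Y \in B}] = P(X \in A,\, Y \in B) \quad\text{and}\quad \mathbb{E}[1_{X \in A}]\,\mathbb{E}[1_{Y \in B}] = P(X \in A)\,P(Y \in B),$$
so the factorization for indicators is exactly the definition of independence.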
But these limit arguments and how to apply them have never felt natural to me, so I wanted to verify my approach and ask for help with the details.
First, for continuous functions $f, g$, we use the Stone–Weierstrass theorem to get sequences of polynomials $f_n, g_n$ that converge uniformly to $f, g$ on the bounded interval containing the ranges of $X, Y$. Then $\mathbb{E}[f_n(X)] \rightarrow \mathbb{E}[f(X)]$, and similarly for $Y, g$. Which theorem exactly are we using for this convergence (assuming I'm not doing something completely wrong, of course): the dominated convergence theorem? Once we have this, we also need to show that $\mathbb{E}[f_n(X)g_n(Y)] \rightarrow \mathbb{E}[f(X)g(Y)]$, which requires $f_ng_n \rightarrow fg$; but this just leverages standard analysis techniques and uses the uniform convergence, am I correct?
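To make this concrete, here is the bound I think is doing the work (please correct me if this is the wrong mechanism): if $|X| \le C$ a.s. and $f_n \to f$ uniformly on $[-C, C]$, then
$$|\mathbb{E}[f_n(X)] - \mathbb{E}[f(X)]| \le \mathbb{E}\,|f_n(X) - f(X)| \le \sup_{|x| \le C} |f_n(x) - f(x)| \longrightarrow 0,$$
which would need no convergence theorem at all; and for the product,
$$|f_n(x)g_n(y) - f(x)g(y)| \le |f_n(x)|\,|g_n(y) - g(y)| + |g(y)|\,|f_n(x) - f(x)|,$$
so uniform convergence of $f_ng_n$ to $fg$ on the relevant compact rectangle follows from the uniform boundedness of the $f_n$.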
Next we want to extend this to all bounded measurable functions $f, g$. For this we use the idea that there are sequences of continuous functions $f_n, g_n$ that converge a.e. pointwise to $f, g$. Is this enough to conclude that $\mathbb{E}[f_n(X)] \rightarrow \mathbb{E}[f(X)]$, and similarly for $Y, g$? It seems we need stronger assumptions here. I'm not entirely sure how valid this last step is and would appreciate some detail.
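To pin down my worry: I would want to apply the dominated convergence theorem in the form
$$\sup_n \|f_n\|_\infty < \infty \ \text{ and } \ f_n(X) \to f(X) \ \text{a.s.} \implies \mathbb{E}[f_n(X)] \to \mathbb{E}[f(X)],$$
but if $f_n \to f$ only Lebesgue-a.e., the exceptional null set need not be null under the law of $X$ (e.g. $X$ could have an atom inside it), so presumably the convergence should be a.e. with respect to the distributions of $X$ and $Y$. Is that the missing assumption?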