I have seen white noise defined as a zero-mean stochastic process with the following autocorrelation function (for example, in the question Time continuous white noise): \begin{align*} E[X(t)X(t+\tau)] = \begin{cases} \sigma^2, & \tau = 0 \\ 0, & \tau \neq 0 \end{cases} \end{align*}
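As a sanity check, the discrete-time analogue of this definition is unproblematic, since it involves only countably many random variables; a quick numpy sketch (illustrative only, with an arbitrary choice of $\sigma = 2$) recovers the stated autocorrelation empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n = 100_000

# Discrete-time analogue: i.i.d. zero-mean samples with variance sigma^2.
x = rng.normal(0.0, sigma, size=n)

# Empirical autocorrelation E[X(t) X(t + tau)] for a few lags.
for tau in range(4):
    r = np.mean(x[: n - tau] * x[tau:])
    print(f"tau = {tau}: {r:+.4f}")  # ~sigma^2 = 4 at tau = 0, ~0 otherwise
```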
However, my understanding is that there are fundamental difficulties with constructing uncountably many independent non-constant random variables in the first place. The argument put forth in the answer to Showing that there do not exist uncountably many independent, non-constant random variables on $([0,1],\mathcal{B},\lambda)$ seems to depend only on the orthogonality of the random variables in the collection, and so it seems to imply that uncountably many *uncorrelated* non-constant random variables also cannot exist on $([0,1], \mathcal{B}, \lambda)$.
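For concreteness, here is the reduction I have in mind (assuming I am reading the linked answer correctly): normalizing each variable turns an uncorrelated family $\{X_i\}_{i \in I}$ with $\operatorname{Var}(X_i) > 0$ into an orthonormal one, \begin{align*} Y_i := \frac{X_i - E[X_i]}{\sqrt{\operatorname{Var}(X_i)}}, \qquad E[Y_i Y_j] = \delta_{ij}, \end{align*} and since $L^2([0,1], \mathcal{B}, \lambda)$ is separable, it cannot contain an uncountable orthonormal family, forcing $I$ to be countable.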
This leads me to the following questions:
- Is my assumption incorrect that the argument in Showing that there do not exist uncountably many independent, non-constant random variables on $([0,1],\mathcal{B},\lambda)$ applies to uncorrelated as well as independent random variables, so that the definition of continuous white noise above works just fine?
- Does the definition of white noise implicitly assume that the random variables are defined on some probability space other than $([0,1], \mathcal{B}, \lambda)$, on which uncountably many independent non-constant random variables can exist?