Consider two real-valued data signals $x(t)$ and $y(t)$, which are independent and uncorrelated. Let's say each signal is white noise: every sample of $x(t)$ is drawn from a distribution with mean $\mu_x$ and standard deviation $\sigma_x$, and every sample of $y(t)$ from a distribution with mean $\mu_y$ and standard deviation $\sigma_y$.
I'm interested in the cross-correlation of $x$ and $y$ over some finite duration $T$ at zero lag, so let's define the zero-lag cross-correlation as, $$\rho_{xy} \equiv \frac{1}{T} \int_{-T/2}^{+T/2} x(t) \, y(t) \, dt$$
(1) I'm pretty sure that $\lim_{T\rightarrow \infty} \left[ \rho_{xy} \right] = \mu_x \, \mu_y$.
(2) My intuition is that if the signals are continuously defined and perfectly white, then $\rho_{xy} = \mu_x \, \mu_y$ exactly, even for finite $T$, because there is structure at arbitrarily small time intervals for the integral to average over. In other words, the standard deviation of $\rho_{xy}$ across realizations is $\sigma_\rho = 0$. Is that correct?
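For what it's worth, the reasoning I have in mind for (1) and (2) is just the independence of $x$ and $y$ at each instant, which fixes the ensemble average for any $T$: $$\langle \rho_{xy} \rangle = \frac{1}{T} \int_{-T/2}^{+T/2} \langle x(t) \, y(t) \rangle \, dt = \frac{1}{T} \int_{-T/2}^{+T/2} \mu_x \, \mu_y \, dt = \mu_x \, \mu_y .$$ The open question in (2) is whether the fluctuations of $\rho_{xy}$ about this ensemble average actually vanish for finite $T$.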
I'm actually more interested in discretely sampled signals, $x_i \equiv x(t_i)$ [and the same for $y$]. Let's say there are $N$ samples in the interval $T$, so that the cross-correlation becomes, $$c_{xy} \equiv \frac{1}{N} \sum_{i=1}^{N} x_i \, y_i $$

(3) I'm pretty sure that $\lim_{N\rightarrow \infty} \left[ c_{xy} \right] = \mu_x \, \mu_y$. [This also seems to bolster the idea of (2).]

(4) It should be the case that, for any finite $N$, the ensemble average is $\langle c_{xy} \rangle = \mu_x \, \mu_y$, but any single realization of $c_{xy}$ will deviate from that average.

Now, the main question: for a finite number of samples, what is the standard deviation $\sigma_c$ of the discretely sampled cross-correlation? Because there are only a finite number of samples, the positive and negative contributions can't cancel perfectly, so some residual should be left over. How is that residual calculated?
[While my situation has the distributions for $x(t)$ and $y(t)$ described above, I would also be interested in a relatively straightforward generalization to arbitrary distributions $p_x, p_y$.]
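In case it helps clarify what I'm asking, here is a minimal Monte Carlo sketch (Python/NumPy; Gaussian white noise and the particular parameter values are just assumptions for illustration) that generates many realizations of $c_{xy}$, checks the ensemble mean against $\mu_x \, \mu_y$, and measures the empirical spread $\sigma_c$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example parameters, just for illustration
mu_x, sigma_x = 1.0, 2.0
mu_y, sigma_y = -0.5, 3.0
N = 1000          # samples per realization
n_trials = 10000  # number of independent realizations

# Each row is one realization of the white-noise samples x_i, y_i
x = rng.normal(mu_x, sigma_x, size=(n_trials, N))
y = rng.normal(mu_y, sigma_y, size=(n_trials, N))

# Zero-lag cross-correlation c_xy = (1/N) * sum_i x_i*y_i, one value per realization
c_xy = np.mean(x * y, axis=1)

print("ensemble mean of c_xy :", c_xy.mean())
print("mu_x * mu_y           :", mu_x * mu_y)
print("empirical sigma_c     :", c_xy.std(ddof=1))
```

The empirical $\sigma_c$ that a script like this reports is exactly the quantity I'd like to be able to predict analytically as a function of $N$, the means, and the standard deviations.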
Thanks!