An acclaimed answer to the question "What does orthogonal mean in statistics?" is beautifully stripped down to
$$\mathbb E[\mathbf {XY^*}]=0$$
I am not familiar with the operations involved, not just because of the inclusion of complex numbers but, most importantly, because I don't see the use of the inner product in this context. In this post I found:
The vector space $\mathscr L_2$ of real-valued random variables on $(\Omega,\mathscr F,\mathbb P)$ (modulo equivalence of course) with finite second moment is special, because it's the only one in which the norm corresponds to an inner product. If $X$ and $Y$ are random variables in $\mathscr L_2$, we define the inner product of $X$ and $Y$ by
$$\langle X,Y\rangle=\mathbb E[XY]$$
In relation to the comments below, I found this quote on Wikipedia:
For real random variables $X$ and $Y$, the expected value of their product $\displaystyle \langle X,Y\rangle :=\operatorname {E} (XY)$ is an inner product. This definition of expectation as inner product can be extended to random vectors as well.
The actual hurdle:
Now, this inner product is not the dot product of two vectors, is it? If the idea is to multiply the elements of two random vectors $X$ and $Y$ elementwise and then take the inner product of that product vector with itself, we would get something like $[XY]^2$, which looks more like a norm... I think there is more to the inner product than that, hinted at in the quotes I pasted above and requiring some knowledge of abstract algebra. This probably includes the concept of the vector space $\mathscr L_2$.
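To pin down where I am stuck, here is a small numerical sketch of what I think the quoted definition means on a finite, equiprobable sample space (the setup, the names and the use of NumPy are my own choices, not taken from the quoted posts): each random variable becomes the plain vector of its values, and $E[XY]$ becomes a probability-weighted dot product of the two value vectors, not anything squared.

```python
# Sketch only: a finite sample space Omega = {0, ..., n-1} with probabilities p.
# A real random variable is then just the vector of its values X(omega), and
#   <X, Y> = E[XY] = sum_omega X(omega) * Y(omega) * P({omega}),
# i.e. a probability-weighted dot product of the two value vectors.
import numpy as np

rng = np.random.default_rng(0)

n = 5
p = np.full(n, 1.0 / n)      # uniform probabilities P({omega})  (my assumption)
X = rng.normal(size=n)       # values X(omega)
Y = rng.normal(size=n)       # values Y(omega)

inner_product = np.sum(X * Y * p)                       # E[XY]
weighted_dot = np.dot(np.sqrt(p) * X, np.sqrt(p) * Y)   # same number as an ordinary dot product

print(inner_product, weighted_dot)   # agree up to floating-point rounding
```

If that reading is right, then the "elementwise product followed by a norm" construction I described above is not what the definition is doing.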
What I got so far:
Since,
\begin{align} \operatorname{Cov}[X,Y]&=E[(X-E[X])\cdot(Y-E[Y])]\\ &=E[X\cdot Y]-E[X\cdot E[Y]]-E[E[X]\cdot Y]+E[E[X]\cdot E[Y]]\\ &=E[X\cdot Y]-E[X]\cdot E[Y] \end{align} and consequently,
$$E[XY]=\operatorname{Cov}[X,Y]+E[X]\cdot E[Y],$$
real-valued random variables $X$ and $Y$ are uncorrelated (which is different from independent) if and only if the centered variables $X-E[X]$ and $Y-E[Y]$ are orthogonal: $E[(X-E[X])(Y-E[Y])]=\operatorname{Cov}[X,Y]=0$, i.e. $E[XY]=E[X]\cdot E[Y]$.
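Finally, a quick Monte Carlo sanity check on the identity and the equivalence above (the simulated distributions below are arbitrary choices of mine, purely for illustration):

```python
# Sketch only: check E[XY] = Cov[X, Y] + E[X] * E[Y] with simulated data, and
# that the inner product of the centered variables equals the covariance.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

X = rng.normal(2.0, 1.0, size=n)
Y = 0.5 * X + rng.normal(-1.0, 1.0, size=n)    # deliberately correlated with X

E_XY = np.mean(X * Y)
cov = np.mean((X - X.mean()) * (Y - Y.mean()))  # sample version of Cov[X, Y]

print(E_XY, cov + X.mean() * Y.mean())          # equal up to Monte Carlo error

# Inner product of the centered variables is exactly the covariance, so
# "X and Y uncorrelated" means "X - E[X] and Y - E[Y] orthogonal".
Xc, Yc = X - X.mean(), Y - Y.mean()
print(np.mean(Xc * Yc), cov)                    # identical by construction
```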