
Let $X_1,\ldots,X_n$ be independent normally distributed random variables with variances $\sigma_1^2,\ldots,\sigma_n^2 \in [0, \infty)$. Furthermore, let $\alpha_1,\dots,\alpha_n,\beta_1,\dots,\beta_n$ be real numbers with $\sum_{k=1}^n \sigma_k^2 \alpha_k\beta_k=0$. Show that $X:=\sum_k \alpha_k X_k$ and $Y:=\sum_k \beta_k X_k$ are independent.

In general $cov(X,Y)=0$ does not imply independence, but I am wondering whether $cov(X,Y)=0$ in this case would imply independence. However, I am not able to prove it. My attempt so far:

$$\begin{align}cov(X,Y)&=E[(X-E[X])(Y-E[Y])]=E[XY]-E[X]E[Y]\\&=E\Big[\Big(\sum_k \alpha_k X_k\Big)\Big(\sum_k \beta_k X_k\Big)\Big]-E\Big[\sum_k \alpha_k X_k\Big]E\Big[\sum_k \beta_k X_k\Big]\\&=E\Big[\sum_k \alpha_k X_k\sum_k \beta_k X_k\Big]-\sum_k \alpha_k E[X_k]\sum_k \beta_k E[X_k]=0 \textrm{ by expansion of the product.}\end{align}$$ But I am not able to continue. In particular, I do not know where to use $\sum_{k=1}^n \sigma_k^2 \alpha_k\beta_k=0$. Does my attempt have any substance, or am I on the wrong track? Comments and help are welcome!

M.Mass
  • You are in the right direction. Just separate the terms of $\sum_{i}\sum_{j}$ into $\sum\sum_{i\neq j}$ and $\sum\sum_{i=j}$ for both terms in the RHS of the last equality, and use linearity of expectation and $Var(X) = E(X^2)-E(X)^2$. – Dhruv Kohli Jun 26 '17 at 17:47

1 Answer


To show $Cov(X,Y) = 0$:

Hints:

$$\begin{align}E(\sum_i\alpha_iX_i\sum_j\beta_jX_j) &= E(\sum_{i=1}^{n}\alpha_i\beta_iX_i^2 + \sum\limits_{i\neq j}\alpha_i\beta_jX_iX_j) \\\\ &= \sum_{i=1}^{n}\alpha_i\beta_iE(X_i^2) + \sum_{i\neq j}\alpha_i\beta_jE(X_i)E(X_j)\end{align}$$

$$\sum_{i=1}^{n}\alpha_iE(X_i)\sum_{j=1}^{n}\beta_jE(X_j) = \sum_{i=1}^n \alpha_i\beta_iE(X_i)E(X_i) + \sum_{i\neq j}\alpha_i\beta_jE(X_i)E(X_j)$$

Use $Var(X_i) = E(X_i^2) - E(X_i)^2$ and the assumption given in the question.

It is easy to show that if $X$ and $Y$ have a jointly normal distribution with zero covariance, then $f_{X,Y}(x,y) = f_{X}(x)f_{Y}(y)$, where $f_{X}$ and $f_{Y}$ are the marginal densities of $X$ and $Y$, implying independence of $X$ and $Y$.
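For concreteness, here is a sketch of that factorization in the bivariate case, written with the standard bivariate normal density and correlation parameter $\rho$ (zero covariance gives $\rho = 0$):

$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\!\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right)$$

With $\rho = 0$ this reduces to

$$\frac{1}{\sqrt{2\pi}\,\sigma_X}e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}\cdot\frac{1}{\sqrt{2\pi}\,\sigma_Y}e^{-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}} = f_X(x)\,f_Y(y),$$

so the joint density is the product of the marginals.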

$$Cov(X,Y) = E(\sum_i\alpha_iX_i\sum_j\beta_jX_j) - \sum_{i=1}^{n}\alpha_iE(X_i)\sum_{j=1}^{n}\beta_jE(X_j) = \sum_{i=1}^{n}\alpha_i\beta_i(E(X_i^2) - E(X_i)^2)$$

$$\implies Cov(X,Y) = \sum_{i=1}^{n}\alpha_i\beta_i\sigma_i^2 = 0, \text{(by assumption)}$$
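As a quick numerical sanity check (not part of the proof; the specific means, variances, and coefficients below are made up for illustration, chosen so that $\sum_k \sigma_k^2\alpha_k\beta_k = 0$), the sample covariance of simulated $X$ and $Y$ should be close to $0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) parameters
mu = np.array([1.0, -2.0, 0.5])
sigma2 = np.array([1.0, 4.0, 2.0])
alpha = np.array([1.0, 1.0, 1.0])
# beta chosen so that sum_k sigma2_k * alpha_k * beta_k = 0
beta = np.array([4.0, 1.0, -4.0])
assert np.isclose(np.sum(sigma2 * alpha * beta), 0.0)

# Simulate independent normals X_k, then form X = sum_k alpha_k X_k, Y = sum_k beta_k X_k
n_samples = 1_000_000
Xk = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=(n_samples, 3))
X = Xk @ alpha
Y = Xk @ beta

# Sample covariance; should be near 0 up to Monte Carlo error
print(np.cov(X, Y)[0, 1])
```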

Dhruv Kohli
    Regarding your last paragraph: https://en.m.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent . Here they are bivariate normal though, which is probably the simple demonstration you had in mind. – spaceisdarkgreen Jun 26 '17 at 18:14
  • Yes, I meant that. I have made an edit. Please check. I'll be more careful in future. – Dhruv Kohli Jun 26 '17 at 18:24
  • @expiTTp1z0 You made a typo in the RHS of the last equation; it should be $\alpha_i \beta_i E(X_i)E(X_i)$ instead of $\alpha_i \beta_i E(X_i)E(X_j)$, or? Also, I do not understand some steps. To show that $cov(X,Y)=0$ I do not need the assumption given in my question, like in my approach above. So once I have shown $cov(X,Y)=0$, I simply need to show $f_{X,Y}(x,y)=f_X(x) f_Y(y)$? –  Jun 26 '17 at 20:22
  • @JohnDoe Thanks for pointing out the typo. I corrected it. You do need that assumption to show that $Cov(X,Y) = 0$, check my edit. And yes for your last sentence. – Dhruv Kohli Jun 26 '17 at 20:39
  • thanks. I was actually using it, but did not realise lol. Thanks for the great effort though! –  Jun 26 '17 at 21:00
  • @expiTTp1z0 Yep that's right. (They're jointly bivariate normal because they're linear combos of the same underlying independent normals. The easiest proof of both facts is by characteristic function.) – spaceisdarkgreen Jun 26 '17 at 22:30
  • @spaceisdarkgreen how would one show this? It would be nice if you could show a proof –  Jun 27 '17 at 21:45
  • @JohnDoe The joint cf for $X,Y$ is $$ \phi(t,s) = E(e^{itX +isY}) = E(e^{i\sum_k (t\alpha_k + s\beta_k)X_k}) = \prod_k E( e^{i(t\alpha_k + s\beta_k)X_k}) = \prod_k e^{i\mu_k(t\alpha_k + s\beta_k) + \frac{1}{2}\sigma_k^2(t\alpha_k+s\beta_k)^2}$$ where in the second equality we used independence, and in the last we used the formula for the cf of a normal. Putting the sum back up in the exponent, you see that this is of the form $$ e^{i\vec \mu\cdot\vec t +\frac{1}{2}\vec t\cdot\Sigma\cdot\vec t}$$ where $\vec t = (t,s),$ which is the cf of a bivariate normal. – spaceisdarkgreen Jun 28 '17 at 03:28
  • @JohnDoe For the fact that bivariate normal with zero covariance implies independence, it's just that the covariance is the off-diagonal element of $\Sigma,$ so if the covariance is zero, $\Sigma$ is diagonal. Then it's straightforward to show that the joint cf (or the joint pdf, if you prefer) factors, ensuring independence (the factoring step is spelled out right after this thread). – spaceisdarkgreen Jun 28 '17 at 03:32
  • @JohnDoe Doi... I realize now I put the wrong sign for all the quadratic terms... should be $-\frac{1}{2}\sigma^2t^2.$ Just a typo... doesn't change anything. – spaceisdarkgreen Jun 28 '17 at 04:03
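To spell out the factoring step from the last two comments (a sketch in the notation used there, with the corrected sign convention, $\Sigma = \operatorname{diag}(\sigma_X^2,\sigma_Y^2)$ and $\vec\mu = (\mu_X,\mu_Y)$):

$$\phi(t,s) = e^{i(\mu_X t + \mu_Y s) - \frac{1}{2}(\sigma_X^2 t^2 + \sigma_Y^2 s^2)} = e^{i\mu_X t - \frac{1}{2}\sigma_X^2 t^2}\, e^{i\mu_Y s - \frac{1}{2}\sigma_Y^2 s^2} = \phi_X(t)\,\phi_Y(s),$$

and a joint characteristic function that factors into the product of the marginal characteristic functions is equivalent to independence of $X$ and $Y$.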