
I ran into a confusing question.

If two variables are independent, can they become dependent after a linear transformation?

How can that happen? Is it possible for independent variables?

What operation makes the variables dependent?

In my opinion, the transformation that maps all variables to a single point does this.

What is wrong with my answer?

The means of the variables are features of the data.

mucciolo
  • I am a bit confused by this question. If you have two random variables, say $X$ and $Y$, then $X+Y$ may not be independent of $X+2Y$, for example, and such examples are easy to construct. Why single out linear combinations: there are far too many operations that can make independent random variables dependent. Mapping everything to a point doesn't work, though: the probability measure thus created is concentrated at one point, and is therefore independent of any other probability distribution, because the variable always takes that point as a value regardless of what other random variables do. – Sarvesh Ravichandran Iyer Jan 25 '21 at 09:13
  • I think I need to converse more with you. I do not understand your question; however, if I can get you to write it clearly, we can get it some attention. Also, I need to clarify whether you know the definition of independence of random variables, and what notation you use, because in the answer below there has been some miscommunication unfortunately, and I don't want that to occur again. – Sarvesh Ravichandran Iyer Jan 25 '21 at 09:15
  • I suppose the converse statement is more interesting: If two variables are dependent, maybe they will be independent after linear transformation. – Arash Jan 26 '21 at 08:01
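The comment's point about mapping everything to a point can be checked numerically: a degenerate (constant) random variable is independent of any other variable, because the joint probability always factorizes. A minimal sketch with numpy (the variable names and the constant value 3.0 are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# X: an arbitrary random variable; C: the "map everything to a point" variable.
x = rng.normal(size=100_000)
c = np.full_like(x, 3.0)  # constant, so its distribution is a point mass at 3

# Independence of C and X: P(C = 3, X <= t) equals P(C = 3) * P(X <= t)
# for every threshold t, because P(C = 3) = 1.
for t in (-1.0, 0.0, 1.0):
    joint = np.mean((c == 3.0) & (x <= t))
    product = np.mean(c == 3.0) * np.mean(x <= t)
    assert abs(joint - product) < 1e-12
```

The factorization holds trivially here; the event $\{C = 3\}$ has probability one, so intersecting with it changes nothing.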

2 Answers


The converse statement is even more interesting: sometimes it is possible to transform dependent random variables into independent ones. For example, let $X$ and $Y$ be jointly normal, zero-mean, unit-variance, with correlation coefficient $\rho$. Define $V=X+Y$ and $W=X-Y$. Then $V$ and $W$ are also jointly normal and zero-mean, and as for their correlation: $$E\{VW\} = E\{X^2-Y^2\}=0$$ Since uncorrelated jointly normal variables are independent, this is a simple and quite practical example (in communication systems, for instance) of how a rotation can let you work with independent random variables instead of dealing with dependencies. Once the analysis is done, the result can easily be transformed back to the original variables.

You can find many other examples like this, many with dramatic impact. For instance, the main reason that 4G cellular access links are much faster than 3G relies on a rather simple linear transform of signals, known as OFDMA. As fancy and complicated as such schemes might seem, the math behind them is quite simple: rotate the compound signals coming from hundreds of users so that they become independent, so you can deal with them separately. I am sure there are many other abstract and practical examples that others can mention here.

The bottom line is: statistical/probabilistic independence depends on the representation of the phenomenon; it is not necessarily an inherent characteristic of the phenomenon.
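The rotation in this answer can be verified numerically. A minimal sketch with numpy (the sample size, seed, and $\rho = 0.8$ are my own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.8  # correlation coefficient of the jointly normal pair

# Draw (X, Y) jointly normal, zero-mean, unit-variance, correlation rho.
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

v = x + y  # V = X + Y
w = x - y  # W = X - Y

# E[VW] = E[X^2 - Y^2] = 1 - 1 = 0, so V and W are uncorrelated;
# being jointly normal, uncorrelated implies independent.
print(np.corrcoef(x, y)[0, 1])  # close to rho = 0.8
print(np.corrcoef(v, w)[0, 1])  # close to 0.0
```

The sample correlation of $(V, W)$ comes out near zero even though $(X, Y)$ are strongly correlated, which is the whole point of the rotation.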

Arash

If you are asking about taking linear transformations of each variable separately, then they will remain independent; see here. If you are asking about a linear transformation of both variables together, then the transformed variables may or may not be independent.
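The second case can be illustrated concretely: starting from independent $X$ and $Y$, a joint linear transform such as $U = X+Y$, $W = X+2Y$ produces correlated (hence dependent) variables, since $\operatorname{Cov}(U,W) = \operatorname{Var}(X) + 2\operatorname{Var}(Y) = 3$ for standard normal inputs. A minimal numpy sketch (names and parameters are my own):

```python
import numpy as np

rng = np.random.default_rng(7)

# X and Y independent, each standard normal.
x = rng.normal(size=200_000)
y = rng.normal(size=200_000)

u = x + y      # U = X + Y
w = x + 2 * y  # W = X + 2Y

# Cov(U, W) = Var(X) + 2 Var(Y) = 3, and
# corr(U, W) = 3 / (sqrt(2) * sqrt(5)) = 3 / sqrt(10), about 0.95.
print(np.corrcoef(u, w)[0, 1])
```

A nonzero correlation is enough to rule out independence, so this confirms that a joint linear transform can destroy it.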

John L