
Why is the variance of a random variable X defined as E[(X − E[X])²] instead of E[|X − E[X]|], where E[X] is the expected value of X?

Let's suppose that all values of X are positive; then E[X] must be positive, and X − E[X] is negative for the values of X below E[X] and positive for the values of X above E[X]. Taking the absolute value of each deviation and computing its expected value, E[|X − E[X]|], would also adequately measure the dispersion of X.
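To make the comparison concrete, here is a minimal sketch in Python that computes both dispersion measures on a small hypothetical sample (the values are illustrative, not from the question): the variance E[(X − E[X])²] and the mean absolute deviation E[|X − E[X]|].

```python
# Hypothetical sample standing in for draws of a random variable X.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = sum(xs) / len(xs)                               # E[X]
variance = sum((x - mean) ** 2 for x in xs) / len(xs)  # E[(X - E[X])^2]
mad = sum(abs(x - mean) for x in xs) / len(xs)         # E[|X - E[X]|]

print(mean, variance, mad)  # 5.0 4.0 1.5
```

Both quantities are zero exactly when X is constant and grow as the values spread out, so either one "works" as a dispersion measure; the question is why the squared version was chosen as the standard definition.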

Why is the square required in the definition of the variance?

  • Why is this question being asked on cs.SE and not math.SE? Computer scientists did not define variance, and as far as I know, its usage in computer science is no different than its usage in math. – Peter Shor Sep 07 '13 at 12:29
  • This is answered on CrossValidated: http://stats.stackexchange.com/questions/118/why-square-the-difference-instead-of-taking-the-absolute-value-in-standard-devia – jogloran Sep 07 '13 at 12:56
  • It has also been asked here before: http://math.stackexchange.com/q/288068 – robjohn Sep 07 '13 at 13:03

0 Answers