
What's the interpretation of the $-\frac{(x-\mu)^2}{2\sigma^2}$ term in the Gaussian function?

I read that it's:

The distance of $x$ from its mean $\mu$. We square it so that we don't have to care which direction it comes from; i.e. if the mean is $2$, then $1$ and $3$ have the same distance. In other words, we use "squared distance" instead of "absolute distance".

But if we divide this by $2\sigma^2$. Then...?

We take a "ratio" of the above distance to the "total distance" (which variance signifies)? Why do we double it?

So the Gaussian function is an exp-transformation (or composition) of a measure of (squared) distance relative to total distance? And so a point of it measures a "portion" of the "total distance/measure"?

mavavilj
  • To see the derivation of mean and variance for a normal distribution: https://math.stackexchange.com/questions/518281/how-to-derive-the-mean-and-variance-of-a-gaussian-random-variable – Chris Henson Jan 09 '20 at 21:29

3 Answers


If $\sigma>0$, $X$ has mean $\mu$ and standard deviation $\sigma$ iff $Z:=\frac{X-\mu}{\sigma}$ has mean $0$ and variance $1$, i.e. iff $\Bbb EZ=0,\,\Bbb E(Z^2)=1$. We can prove $\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)$ is the PDF of such a distribution, and (which is more obvious) that $0$ is also its median and mode. But $Z$ has this PDF iff $X$ has PDF $\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$.
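As a quick numerical sketch of this standardization (with hypothetical values of $\mu$ and $\sigma$), the PDF of $X$ equals the standard-normal PDF evaluated at $z=\frac{x-\mu}{\sigma}$, divided by $\sigma$:

```python
import math

def pdf_normal(x, mu, sigma):
    # General normal PDF: exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 2.0, 1.5              # hypothetical example values
for x in [-1.0, 0.5, 2.0, 4.0]:
    z = (x - mu) / sigma          # standardize: Z = (X - mu) / sigma
    # pdf_X(x) = pdf_Z(z) / sigma, where Z is standard normal
    assert abs(pdf_normal(x, mu, sigma) - pdf_normal(z, 0.0, 1.0) / sigma) < 1e-12
```

So the exponent $-\frac{(x-\mu)^2}{2\sigma^2}$ is just $-\frac{z^2}{2}$ with the distance measured in units of $\sigma$.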

J.G.

As an analogy, consider machine learning, where we penalize values that are far from the mean. The denominator $2\sigma^2$ defines how strongly we penalize: in terms of distance, if $\sigma^2$ is small, points closer to the mean are weighted much more heavily.

And we double it because we need to take into account both the positive and negative directions (and probably because it gives nicer values of the variance and mean after integrating).

I probably did not answer your question, but this is as far as I can understand it :)

  • Hmm. So you mean that if we have e.g. mean $=2$ and values $1$ and $3$, then variance $\sigma^2=1$, since this is the distance of $1$ and $3$ from the mean. However, the "total variance" or "spread" would be $2\sigma^2=2 \cdot 1=2$, since $3-1=2$. But this is slightly confusing, since in the case of the normal distribution $2\sigma^2$ doesn't give the total variance? https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQ4cLDsT6v6BMZc7jnTfrtzc6W-fcuhYmy6IYEhgKf2tkUPcPbrQg&s – mavavilj Jan 10 '20 at 11:42

Let our probability distribution be $P(x)=Ae^{-\alpha x^2}$. This is the Normal or Gaussian Probability Distribution colloquially referred to as the Bell Curve.

It can be shown that we need $A=\sqrt{\alpha/\pi}$ if we want its integral to be $1$, making it a proper probability distribution.

What is the variance of $P(x)$? By definition we need to find $\sigma^2=\int_{-\infty}^\infty x^2Ae^{-\alpha x^2}\,dx-\left(\int_{-\infty}^\infty xAe^{-\alpha x^2}\,dx\right)^2$.

The second integral is zero because of symmetry. The first integral will give you $\sigma^2=\frac{1}{2\alpha}$.

Substituting $\alpha=\frac{1}{2\sigma^2}$, so that $A=\sqrt{\alpha/\pi}=\frac{1}{\sqrt{2\pi\sigma^2}}$, this means $P(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{x^2}{2\sigma^2}}$.
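As a numerical sanity check (a sketch, with a hypothetical value of $\alpha$), a crude Riemann sum reproduces both the normalization $A=\sqrt{\alpha/\pi}$ and the variance $\sigma^2=\frac{1}{2\alpha}$:

```python
import math

alpha = 0.8                      # hypothetical width parameter
A = math.sqrt(alpha / math.pi)   # normalizing constant

# Crude Riemann sum over a wide symmetric interval; the Gaussian decays
# fast enough that [-20, 20] captures essentially all of the mass.
dx = 0.001
xs = [i * dx for i in range(-20000, 20001)]
total = sum(A * math.exp(-alpha * x * x) * dx for x in xs)
mean  = sum(x * A * math.exp(-alpha * x * x) * dx for x in xs)
var   = sum(x * x * A * math.exp(-alpha * x * x) * dx for x in xs) - mean ** 2

assert abs(total - 1.0) < 1e-6             # integrates to 1
assert abs(mean) < 1e-6                    # mean is 0 by symmetry
assert abs(var - 1 / (2 * alpha)) < 1e-6   # sigma^2 = 1 / (2 alpha)
```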

$\sigma$ is the standard deviation of the curve. Geometrically, this is a measure of how wide it is. The bigger $\alpha$, and consequently the smaller $\sigma$, the closer to the mean the "tail" of the bell curve sits, i.e. the narrower the curve.
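One concrete way to see the width shrink as $\alpha$ grows (a sketch with hypothetical $\alpha$ values): the half-width at half-maximum of $e^{-\alpha x^2}$ is $x=\sqrt{\ln 2/\alpha}$, which decreases as $\alpha$ increases.

```python
import math

def gauss(x, alpha):
    # Normalized Gaussian A * exp(-alpha x^2), A = sqrt(alpha / pi)
    return math.sqrt(alpha / math.pi) * math.exp(-alpha * x * x)

widths = []
for alpha in [0.5, 2.0, 8.0]:
    # Solve exp(-alpha x^2) = 1/2 for the half-width at half-maximum.
    hwhm = math.sqrt(math.log(2) / alpha)
    # Check: the curve really is at half its peak value there.
    assert abs(gauss(hwhm, alpha) - 0.5 * gauss(0.0, alpha)) < 1e-12
    widths.append(hwhm)

# Bigger alpha -> narrower curve.
assert widths[0] > widths[1] > widths[2]
```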

What happens if we instead use $P(x)=Ae^{-\alpha(x-\mu)^2}$, and vary $\mu$? If $\mu=0$, then we have a bell curve peaked at the origin and symmetric on either side. If we increase $\mu$, we keep the same shape and size of the curve but move its peak to the right. If we decrease $\mu$, we move it to the left.

How do we know this $\mu$ is the mean?

Consider $\int_{-\infty}^\infty xAe^{-\alpha (x-\mu)^2}\,dx=\int_{-\infty}^\infty (x-\mu)Ae^{-\alpha(x-\mu)^2}\,dx + \int_{-\infty}^\infty \mu Ae^{-\alpha(x-\mu)^2}\,dx=\mu$. The first integral vanishes because its integrand is odd about $x=\mu$, and the second is $\mu$ times the total integral of $P$, which is $1$.
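This split can be checked numerically (a sketch with hypothetical $\alpha$ and $\mu$): the odd part integrates to $0$ and the constant part to $\mu$.

```python
import math

alpha, mu = 0.5, 3.0             # hypothetical example parameters
A = math.sqrt(alpha / math.pi)   # normalizing constant

# Crude Riemann sum on a wide grid centered at mu.
dx = 0.001
xs = [mu + i * dx for i in range(-20000, 20001)]

odd_part   = sum((x - mu) * A * math.exp(-alpha * (x - mu) ** 2) * dx for x in xs)
const_part = sum(mu * A * math.exp(-alpha * (x - mu) ** 2) * dx for x in xs)

assert abs(odd_part) < 1e-6                      # vanishes by symmetry about mu
assert abs(const_part - mu) < 1e-6               # mu times the total integral (= 1)
assert abs((odd_part + const_part) - mu) < 1e-6  # so the mean is mu
```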

TurlocTheRed