Questions tagged [gaussian]

Gaussian refers to the Gaussian (or normal) distribution, a continuous probability distribution that is widely used in statistics. Its probability density function is given by:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$$
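
For quick reference, a minimal Python sketch (illustrative only; the helper name gaussian_pdf is ours) that evaluates this density and checks it against scipy.stats.norm:

```python
import numpy as np
from scipy.stats import norm

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Evaluate the Gaussian density from the formula above."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-4.0, 4.0, 9)
# The hand-rolled formula agrees with scipy's reference implementation.
print(np.allclose(gaussian_pdf(x), norm.pdf(x, loc=0.0, scale=1.0)))
```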

82 questions
18 votes · 5 answers

Anaconda R version - How to upgrade to 4.0 and later

I use R through the Anaconda Navigator, which manages all my package installations. I need to use qgraph for a project, which depends on the mnormt library, which in turn needs RStudio version >4.0. I think the solution to my problem would be to…
Saranya Prakash
10 votes · 1 answer

Why Gaussian latent variable (noise) for GAN?

When I was reading about GANs, the thing I didn't understand is why people often choose the input to a GAN (z) to be samples from a Gaussian. And are there also potential problems associated with this?
asahi kibou
  • 143
  • 1
  • 5
9 votes · 1 answer

Gaussian Mixture Models as a classifier?

I'm learning the GMM clustering algorithm. I don't understand how it can be used as a classifier. Here are my thoughts: 1) GMM is an unsupervised ML algorithm. At least that's how sklearn categorizes it. 2) Unsupervised methods can cluster data, but…
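
For context on this question: although a Gaussian mixture is fit without labels, a common recipe for turning it into a classifier is to fit one mixture per class and assign new points to the class with the highest prior-weighted likelihood. A minimal sketch with scikit-learn's GaussianMixture (the iris data is just a stand-in):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Fit one small mixture per class, on that class's samples only.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in classes}
log_priors = {c: np.log(np.mean(y == c)) for c in classes}

# Classify by the largest class-conditional log-likelihood plus log prior.
scores = np.column_stack([models[c].score_samples(X) + log_priors[c] for c in classes])
y_pred = classes[np.argmax(scores, axis=1)]
print("training accuracy:", np.mean(y_pred == y))
```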
8 votes · 3 answers

Why does PCA assume Gaussian Distribution?

From Jon Shlens's A Tutorial on Principal Component Analysis - version 1, page 7, section 4.5, II: The formalism of sufficient statistics captures the notion that the mean and the variance entirely describe a probability distribution. The only…
Math J
7 votes · 3 answers

What are the reasons for drawing initial neural network weights from the Gaussian distribution?

Are there theoretical or empirical reasons for drawing initial weights of a multilayer perceptron from a Gaussian rather than from, say, a Cauchy distribution?
5 votes · 2 answers

Working with Data which is not Normal/Gaussian

What happens if my data/feature is not normal? Can I still use machine learning algorithms to utilize such data for predictions? I noticed that in many data science courses, there is always a strong assumption of normal/Gaussian data. I have…
Newbie01
4 votes · 1 answer

Modifying a distribution by adding in samples incrementally

I would like to calculate the distribution (e.g., Gaussian) of a set of samples. However, I would also like to see how the distribution changes as I fit the samples into the distribution incrementally. One way to do this would be to compute the…
Arnold
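
For the incremental question above, assuming a single Gaussian is being fitted: Welford's online algorithm keeps running estimates of the mean and variance that can be inspected after every new sample, with no refit from scratch. A sketch (the class name is just for illustration):

```python
import numpy as np

class RunningGaussian:
    """Online mean/variance via Welford's algorithm."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 0 else float("nan")

rg = RunningGaussian()
for x in np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1000):
    rg.update(x)
print(rg.mean, np.sqrt(rg.variance))  # approaches 5.0 and 2.0
```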
4 votes · 2 answers

Is it possible to train a probabilistic model to return several distributions?

I have nonlinear data of a function y(x), which is, let's say, parabolic. At some points of x there are several y's (look at the picture). Is it possible to train a probabilistic model to return several distributions (when needed), i.e. several means…
4 votes · 1 answer

What do the mu and sigma vectors really mean in a VAE?

In a standard autoencoder, we encode the data down to a bottleneck and then decode, using the initial input as the target when computing the loss. We apply matrix multiplications throughout the network, and if training goes well, the output should be close to the initial input. I…
Stenga
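
As a pointer for this question: in a VAE the encoder outputs mu and sigma (usually as a log-variance) describing a diagonal Gaussian over the latent code, and a code is drawn via the reparameterization trick. A toy numpy sketch with made-up encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.3, -1.2])       # per-dimension means produced by the encoder (illustrative)
log_var = np.array([-0.5, 0.1])  # per-dimension log-variances produced by the encoder (illustrative)

# Reparameterization trick: sample eps ~ N(0, I) and shift/scale it,
# so the sampling step stays differentiable with respect to mu and log_var.
sigma = np.exp(0.5 * log_var)
eps = rng.standard_normal(mu.shape)
z = mu + sigma * eps             # latent sample passed to the decoder
print(z)
```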
4 votes · 1 answer

Why are observation probabilities modelled as Gaussian distributions in HMM?

HMM is a statistical model with unobserved (i.e. hidden) states used for recognition algorithms (speech, handwriting, gesture, ...). What distinguishes DHMM from CHMM is the transition probability matrix P with elements. In CHMM, the state space of…
4 votes · 3 answers

Transform a skewed distribution into a Gaussian distribution

I have a skewed distribution that looks like this: How can I transform it to a Gaussian distribution? The values represent ranks, so modifying the values does not cause information loss as long as the order of values remains the same. I'm doing…
bkoodaa
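
Since that question notes the values are ranks, one order-preserving option is a rank-based inverse normal transform: map the ranks into (0, 1) and push them through the normal quantile function. A sketch (the helper name is illustrative):

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_normal(x):
    """Map values to an approximately standard normal via their ranks; the
    mapping is monotone, so the ordering of the values is preserved."""
    ranks = rankdata(x)                 # ranks 1..n, ties get average ranks
    quantiles = ranks / (len(x) + 1.0)  # squeeze ranks into the open interval (0, 1)
    return norm.ppf(quantiles)          # standard normal quantiles

skewed = np.random.default_rng(0).exponential(scale=2.0, size=1000)
transformed = rank_inverse_normal(skewed)
print(transformed.mean(), transformed.std())  # roughly 0 and 1
```

scikit-learn's QuantileTransformer with output_distribution='normal' implements essentially the same idea.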
3 votes · 1 answer

Why does the Gaussian mixture model use expectation maximization instead of gradient descent?

Why does the Gaussian mixture model use expectation maximization instead of gradient descent? What other models use expectation maximization rather than gradient descent to find the optimal parameters?
star
3 votes · 1 answer

Mixture Density Network: determine the parameters of each Gaussian component

I am reading Bishop's Mixture Density Network paper at: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/bishop-ncrg-94-004.pdf This is a good paper, but I am still confused about a few small details. I am wondering if anyone…
Edamame
3 votes · 1 answer

Inductive Bias in Gaussian process

All supervised learning techniques have some kind of an inductive bias. What is the inductive bias in Gaussian process models?
raK1
3 votes · 1 answer

Gaussian Mixture Models EM algorithm use average log likelihood to test convergence

I was investigating scikit-learn's implementation of the EM algorithm for fitting Gaussian Mixture Models, and I was wondering how they came up with using the average log-likelihood instead of the sum of the log-likelihoods to test convergence.…
nyro_0
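
On the average-vs-sum point in this question: the per-sample average log-likelihood does not grow with the dataset size, so a single tolerance threshold behaves the same for 1,000 or 1,000,000 samples, whereas a threshold on the summed log-likelihood would have to scale with n. A sketch with scikit-learn's GaussianMixture (whose tol is, as I understand it, compared against the change in the per-sample average lower bound):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3.0, 1.0, size=(500, 1)),
                    rng.normal(3.0, 1.0, size=(1500, 1))])

gmm = GaussianMixture(n_components=2, tol=1e-3, random_state=0).fit(X)

per_sample = gmm.score_samples(X)   # log-likelihood of each individual sample
print("sum  :", per_sample.sum())   # grows with the number of samples
print("mean :", per_sample.mean())  # comparable across dataset sizes
```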