
I understand how KL divergence provides a measure of how one probability distribution differs from a second, reference probability distribution. But why is it used in particular (instead of cross-entropy) in VAEs (which are generative)?

den.run.ai

1 Answer


Answering from a theoretical understanding of variational autoencoders.

In the general encoder-decoder architecture, the encoder maps the input to a point in a latent space, and the decoder reconstructs the input from that encoded latent representation.
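As a minimal sketch of that plain autoencoder setup (PyTorch and the layer sizes here are assumptions of mine, not part of the original answer):

```python
import torch.nn as nn

# Plain autoencoder: the encoder maps the input to a single latent point,
# and the decoder reconstructs the input from that point.
class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)       # latent point
        return self.decoder(z)    # reconstruction
```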

In a variational autoencoder (VAE), however, the input is encoded as a latent distribution instead of a single point in the latent space. This latent distribution is taken to be Gaussian (and so can be expressed in terms of a mean and a variance). The decoder then samples a point from this distribution and reconstructs the input. Since the VAE encoder produces a distribution rather than a point, and KL divergence measures the difference between two distributions, it is used as a regularization term in the loss function: it penalizes how far the encoded distribution drifts from a standard normal prior. Cross-entropy, by contrast, is typically used for the reconstruction term of the loss, not for comparing the latent distribution to its prior.
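As a rough illustration of how that KL term enters the loss (a sketch assuming PyTorch, a Gaussian encoder that outputs `mu` and `log_var`, and an MSE reconstruction term; the names are hypothetical, not from the answer):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) in a differentiable way (reparameterization trick)."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + eps * std

def vae_loss(x_recon, x, mu, log_var):
    """Reconstruction error plus KL regularization.

    mu and log_var parameterize the encoder's Gaussian q(z|x); the KL term
    uses the closed form
    KL(N(mu, sigma^2) || N(0, 1)) = -0.5 * (1 + log sigma^2 - mu^2 - sigma^2).
    """
    # Reconstruction term: how well the decoder rebuilds the input.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL term: pushes the encoded distribution q(z|x) toward the prior N(0, I).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```

The closed-form KL term is what keeps the encoded distributions close to the standard normal prior, so that sampling a latent point from that prior and decoding it yields a plausible output.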

Ashwin Geet D'Sa