Questions tagged [autoencoder]

Autoencoders are a type of neural network that learns a useful encoding for data in an unsupervised manner.

322 questions
22
votes
1 answer

What is "posterior collapse" phenomenon?

I was going through the paper Towards Text Generation with Adversarially Learned Neural Outlines, and it explains why VAEs are hard to train for text generation because of this problem. The paper states that the model ends up relying solely on the…
thanatoz
  • 2,495
  • 4
  • 20
  • 41
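For context, a hedged sketch of the standard VAE objective against which this phenomenon is usually described; the notation below is the textbook formulation, not taken from the linked paper:

% Per-example evidence lower bound (ELBO) maximized by a VAE:
\mathcal{L}(\theta,\phi;x) =
  \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)
% "Posterior collapse" names the degenerate optimum where the KL term is driven to ~0,
% i.e. q_\phi(z|x) \approx p(z) for every x, so a powerful (e.g. autoregressive) decoder
% simply ignores the latent code z.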
20
votes
3 answers

Why are autoencoders for dimension reduction symmetrical?

I'm not an expert in autoencoders or neural networks by any means, so forgive me if this is a silly question. For the purpose of dimension reduction or visualizing clusters in high-dimensional data, we can use an autoencoder to create a (lossy) 2…
dcl
  • 261
  • 1
  • 2
  • 7
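As a concrete reference point for this question, here is a minimal sketch of the kind of symmetric autoencoder typically used for 2-D visualization. The layer sizes, variable names, and placeholder data are illustrative assumptions, not taken from the question.

# Minimal symmetric dense autoencoder with a 2-unit bottleneck (illustrative sketch).
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

n_features = 100                                   # assumed input dimensionality
inputs = Input(shape=(n_features,))
h = Dense(64, activation='relu')(inputs)           # encoder
code = Dense(2, activation='linear')(h)            # 2-D bottleneck for plotting
h_dec = Dense(64, activation='relu')(code)         # decoder mirrors the encoder
outputs = Dense(n_features, activation='linear')(h_dec)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, code)                      # used afterwards to get the 2-D embedding
autoencoder.compile(optimizer='adam', loss='mse')

X = np.random.rand(1000, n_features).astype('float32')   # placeholder data
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)
embedding = encoder.predict(X)                     # (1000, 2) points to scatter-plot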
17
votes
2 answers

What is the difference between an autoencoder and an encoder-decoder?

I want to know if there is a difference between an autoencoder and an encoder-decoder.
Kahina
  • 644
  • 1
  • 9
  • 24
13
votes
3 answers

How can autoencoders be used for clustering?

Suppose I have a set of time-domain signals with absolutely no labels. I want to cluster them in 2 or 3 classes. Autoencoders are unsupervised networks that learn to compress the inputs. So given an input $x^{(i)}$, weights $W_1$ and $W_2$, biases…
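A minimal sketch of the usual recipe, assuming the unlabeled signals have already been windowed into fixed-length vectors and that 2-3 clusters are wanted: train the autoencoder for reconstruction, then run k-means on the learned codes. Sizes and names below are illustrative assumptions.

# Train an autoencoder, then cluster the latent codes (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

X = np.random.rand(500, 256).astype('float32')    # placeholder for the time-domain signals

inp = Input(shape=(256,))
z = Dense(32, activation='relu')(inp)              # compressed representation
out = Dense(256, activation='linear')(z)
ae = Model(inp, out)
ae.compile(optimizer='adam', loss='mse')
ae.fit(X, X, epochs=20, batch_size=32, verbose=0)

codes = Model(inp, z).predict(X)                   # latent features for each signal
labels = KMeans(n_clusters=3, n_init=10).fit_predict(codes)   # 3 assumed clusters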
12
votes
2 answers

Does it make sense to train a CNN as an autoencoder?

I work with analyzing EEG data, which will eventually need to be classified. However, obtaining labels for the recordings is somewhat expensive, which has led me to consider unsupervised approaches, to better utilize our quite large amounts of…
10
votes
1 answer

Why is Reconstruction in Autoencoders Using the Same Activation Function as Forward Activation, and not the Inverse?

Suppose you have an input layer with $n$ neurons and the first hidden layer has $m$ neurons, typically with $m < n$. Then you compute the activation $a_j$ of the $j$-th neuron in the hidden layer by $a_j = f\left(\sum\limits_{i=1..n} w_{i,j}…
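For reference, a hedged reconstruction of the standard formulation the excerpt is quoting; the decoder symbols ($g$, $w'$, $b'$) are assumptions added only to complete the picture:

% Encoder: hidden activation of the j-th unit.
a_j = f\Bigl(\sum_{i=1}^{n} w_{i,j}\, x_i + b_j\Bigr), \qquad j = 1,\dots,m
% Decoder: the reconstruction applies another weight matrix and a nonlinearity g of the
% same kind (often identical to f), rather than the inverse f^{-1}:
\hat{x}_i = g\Bigl(\sum_{j=1}^{m} w'_{j,i}\, a_j + b'_i\Bigr), \qquad i = 1,\dots,n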
10
votes
1 answer

Transforming AutoEncoders

I've just read Geoff Hinton's paper on transforming autoencoders (Hinton, Krizhevsky and Wang: Transforming Auto-encoders, in Artificial Neural Networks and Machine Learning, 2011) and would quite like to play around with something like this. But…
Daniel Slater
  • 256
  • 1
  • 8
10
votes
2 answers

Transform an Autoencoder to a Variational Autoencoder?

I would like to compare training an autoencoder with training a variational autoencoder. I have already run the training with the AE. I would like to know if it's possible to transform this AE into a VAE while keeping the same inputs and outputs. Thank you.
Kahina
  • 644
  • 1
  • 9
  • 24
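A minimal sketch of the usual conversion, under the assumption of a dense Keras autoencoder: keep the AE's input and output shapes, but replace the deterministic bottleneck with mean/log-variance layers and a reparameterized sample, and add the KL term to the reconstruction loss. All sizes and names are illustrative; the add_loss pattern follows the classic tf.keras functional-API VAE examples and may need a custom layer or train_step on newer Keras versions.

# AE -> VAE: probabilistic bottleneck plus KL regularizer (illustrative sketch).
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

n_features, latent_dim = 784, 16

inp = Input(shape=(n_features,))
h = Dense(128, activation='relu')(inp)
z_mean = Dense(latent_dim)(h)                  # replaces the AE's single code layer
z_log_var = Dense(latent_dim)(h)

def sample(args):
    mean, log_var = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps  # reparameterization trick

z = Lambda(sample)([z_mean, z_log_var])
h_dec = Dense(128, activation='relu')(z)
out = Dense(n_features, activation='sigmoid')(h_dec)

vae = Model(inp, out)
# Reconstruction term (summed over features) plus KL divergence to the unit Gaussian prior.
recon = n_features * tf.reduce_mean(tf.keras.losses.binary_crossentropy(inp, out))
kl = -0.5 * tf.reduce_mean(
    tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
vae.add_loss(recon + kl)
vae.compile(optimizer='adam')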
10
votes
1 answer

Robustness of ML Model in question

While trying to emulate an ML model similar to the one described in this paper, I eventually seemed to get good clustering results on some sample data after a bit of tweaking. By "good" results, I mean that each observation was put in a cluster with…
10
votes
1 answer

Difference: Replicator Neural Network vs. Autoencoder

I'm currently studying papers about outlier detection using RNNs (Replicator Neural Networks) and wonder what the particular difference from autoencoders is. RNNs seem to be treated by many as the holy grail of outlier/anomaly detection; however…
Nex
  • 285
  • 2
  • 6
9
votes
1 answer

Validation loss is lower than the training loss

I am using an autoencoder for anomaly detection in warranty data. Architecture 1: The plot shows the training vs validation loss based on Architecture 1. As we see in the plot, the validation loss is lower than the training loss, which is totally weird.…
9
votes
1 answer

Which type of autoencoder gives the best results for text?

I did a couple of examples of autoencoders for images and they worked fine. Now I want to build an autoencoder for text that takes a sentence as input and returns the same sentence. But when I try to use the same autoencoders as the ones I used for…
sspp
  • 109
  • 2
  • 6
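A minimal sketch of the kind of architecture usually tried for this: a sequence-to-sequence LSTM autoencoder that reads a tokenized sentence and is trained to reproduce the same token sequence. The vocabulary size, sequence length, and placeholder token ids are assumptions, not from the question.

# Seq2seq LSTM autoencoder for text (illustrative sketch).
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

vocab_size, max_len, latent_dim = 5000, 20, 128

inp = Input(shape=(max_len,), dtype='int32')
emb = Embedding(vocab_size, 64)(inp)
code = LSTM(latent_dim)(emb)                          # fixed-size sentence embedding
dec = RepeatVector(max_len)(code)                     # feed the code at every time step
dec = LSTM(latent_dim, return_sequences=True)(dec)
out = TimeDistributed(Dense(vocab_size, activation='softmax'))(dec)

text_ae = Model(inp, out)
text_ae.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

tokens = np.random.randint(1, vocab_size, size=(100, max_len)).astype('int32')  # placeholder ids
text_ae.fit(tokens, tokens, epochs=2, verbose=0)      # target is the input sequence itself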
9
votes
2 answers

Why are my training and validation losses not changing?

I used an MSE loss function and SGD optimization:
xtrain = data.reshape(21168, 21, 21, 21, 1)
inp = Input(shape=(21, 21, 21, 1))
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu', padding='same')(inp)
x = MaxPool3D(pool_size=(3, 3,…
sp_713
  • 115
  • 1
  • 2
  • 4
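The excerpt's code is truncated, so the decoder below is an assumed mirror of the encoder, not the asker's actual architecture; filter counts are also reduced for the sketch. It shows a 3-D convolutional autoencoder for 21x21x21 volumes that compiles and trains with MSE and SGD.

# 3-D convolutional autoencoder, mirrored decoder assumed (illustrative sketch).
import numpy as np
from tensorflow.keras.layers import Input, Conv3D, MaxPool3D, UpSampling3D
from tensorflow.keras.models import Model

inp = Input(shape=(21, 21, 21, 1))
x = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(inp)
x = MaxPool3D(pool_size=(3, 3, 3))(x)                 # 21 -> 7 along each axis
x = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(x)
x = UpSampling3D(size=(3, 3, 3))(x)                   # 7 -> 21
out = Conv3D(1, (3, 3, 3), activation='linear', padding='same')(x)

conv_ae = Model(inp, out)
conv_ae.compile(optimizer='sgd', loss='mse')

xtrain = np.random.rand(64, 21, 21, 21, 1).astype('float32')   # placeholder volumes
conv_ae.fit(xtrain, xtrain, epochs=2, batch_size=8, verbose=0)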
8
votes
1 answer

Why do autoencoders use binary_crossentropy loss and not mean squared error?

The Keras autoencoder examples (https://blog.keras.io/building-autoencoders-in-keras.html) use binary_crossentropy (BCE) as the loss function. Why do they use binary_crossentropy (BCE) and not MSE? According to the Keras example, the input to the…
user3668129
  • 769
  • 4
  • 15
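For reference, a hedged sketch of the two per-pixel losses being compared, using their standard definitions for targets $x_i \in [0,1]$ and sigmoid outputs $\hat{x}_i$ (the Keras blog example normalizes pixels into $[0,1]$, which is why BCE is applicable at all):

\mathrm{BCE}(x,\hat{x}) = -\frac{1}{n}\sum_{i=1}^{n}
   \bigl[x_i \log \hat{x}_i + (1 - x_i)\log(1 - \hat{x}_i)\bigr]
\qquad
\mathrm{MSE}(x,\hat{x}) = \frac{1}{n}\sum_{i=1}^{n} (x_i - \hat{x}_i)^2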
8
votes
1 answer

Using an autoencoder for anomaly detection on categorical data

Say a dataset has 0.5% of its features continuous and 99.5% categorical (binary), with ~2400 features in total. In this dataset, each observation belongs to one of two classes: Fraud (1) or Not Fraud (0). Furthermore, there is a large class imbalance with only…
PyRsquared
  • 1,666
  • 1
  • 12
  • 18
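A minimal sketch of the common approach for this setting, with names, layer sizes, and the threshold choice all being assumptions: train an autoencoder on non-fraud rows only, then flag observations whose reconstruction error is unusually large. A sigmoid output with binary cross-entropy suits the mostly binary features.

# Reconstruction-error anomaly detection on (mostly) binary features (illustrative sketch).
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

n_features = 2400
X_normal = (np.random.rand(2000, n_features) > 0.5).astype('float32')  # placeholder non-fraud rows
X_test = (np.random.rand(200, n_features) > 0.5).astype('float32')     # placeholder new data

inp = Input(shape=(n_features,))
h = Dense(256, activation='relu')(inp)
code = Dense(32, activation='relu')(h)
h = Dense(256, activation='relu')(code)
out = Dense(n_features, activation='sigmoid')(h)       # suits the binary features

ae = Model(inp, out)
ae.compile(optimizer='adam', loss='binary_crossentropy')
ae.fit(X_normal, X_normal, epochs=5, batch_size=64, verbose=0)

recon = ae.predict(X_test)
errors = np.mean((X_test - recon) ** 2, axis=1)        # per-row reconstruction error
threshold = np.percentile(errors, 99)                  # assumed cut-off
flagged = errors > threshold                           # candidate fraud cases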