A few years ago, I gained a much better understanding of the classical MLP neural network by writing an implementation from scratch (using only Python + NumPy, without TensorFlow). Now I'd like to do the same for recurrent neural networks.
For a standard MLP with dense layers, forward propagation can be summarized as:
def predict(x0):
    x = x0
    for i in range(numlayers - 1):
        y = dot(W[i], x) + B[i]  # W[i] is a weight matrix, B[i] the biases
        x = activation[i](y)
    return x
For a single layer, the idea is just:
output_vector = activation(W[i] * input_vector + B[i])
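For reference, here is a minimal runnable version of that single dense layer in NumPy (the sizes 32 → 100, the random weights, and the tanh activation are arbitrary choices of mine, just for illustration):

import numpy as np

# Toy example of one dense layer: 32 inputs -> 100 outputs
# (sizes and tanh are arbitrary, chosen only to illustrate the shapes)
rng = np.random.default_rng(0)
W_i = rng.standard_normal((100, 32))   # weight matrix of layer i
B_i = rng.standard_normal(100)         # bias vector of layer i

input_vector = rng.standard_normal(32)
output_vector = np.tanh(np.dot(W_i, input_vector) + B_i)  # activation = tanh here
print(output_vector.shape)  # (100,)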
What's the equivalent for a simple RNN layer, e.g. SimpleRNN?
More precisely, let's take the example of an RNN layer like this:
Input shape: (None, 250, 32)
Output shape: (None, 100)
Given an input x of shape (250, 32), what pseudo-code would generate the output y of shape (100,), using the layer's weights of course?
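To make the question more concrete, here is my current guess in NumPy. It is only a sketch based on my (possibly wrong) understanding that the layer keeps a hidden state and returns the last one; the weight names Wx, Wh, b, the tanh activation, and the zero initial state are my own assumptions, not something I have verified against the actual layer:

import numpy as np

def simple_rnn_forward(x, Wx, Wh, b):
    # My guess at the recurrence (names and tanh are assumptions):
    # x: (250, 32) input sequence, Wx: (100, 32), Wh: (100, 100), b: (100,)
    h = np.zeros(100)                      # assumed initial hidden state
    for t in range(x.shape[0]):            # iterate over the 250 time steps
        h = np.tanh(np.dot(Wx, x[t]) + np.dot(Wh, h) + b)  # update hidden state
    return h                               # shape (100,), the layer output?

# Example with random weights, just to check the shapes
rng = np.random.default_rng(0)
x = rng.standard_normal((250, 32))
Wx = rng.standard_normal((100, 32))
Wh = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
print(simple_rnn_forward(x, Wx, Wh, b).shape)  # (100,)

Is this the correct recurrence, or am I missing something?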