I'm reading through Sebastian Raschka's Python Machine Learning, and I see something confusing that is not explained in the text.

In the code on this page, under "Implementing a perceptron learning algorithm in Python": https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch02/ch02.ipynb

In the training process, in addition to updating the weights, I see this happening:

 self.w_[0] += update

Then later on, during "prediction", when the weights are applied to the input, I see self.w_[0] being used:

def net_input(self, X):
    """Calculate net input"""
    return np.dot(X, self.w_[1:]) + self.w_[0]

It looks like this is a bias being added into the perceptron, but the book says that net_input simply calculates "weights transpose dot x" and mentions nothing about this + self.w_[0] part...

Can anyone take a look at the linked code and make sense of what's going on with the self.w_[0] part? Or can anyone who has the book explain why it's there?

tmsimont
1 Answer

I suspect the book's English explanation is a simplification (or perhaps it has adopted the standard convention of adding an extra dimension to $x$ that holds a constant value $1$). Most likely you've already given the explanation yourself: it's adding the bias value.
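
For what it's worth, here's a minimal sketch (not code from the book; the array names are made up for illustration) showing that the two conventions agree: keeping a separate bias w_[0], as net_input does, gives the same result as prepending a constant-1 column to X and folding the bias into the weight vector:

    import numpy as np

    # Hypothetical data, just to demonstrate the equivalence
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))   # 5 samples, 3 features
    w_ = rng.normal(size=4)       # w_[0] is the bias, w_[1:] the feature weights

    # The book's convention: bias kept as a separate term
    net_separate = np.dot(X, w_[1:]) + w_[0]

    # The augmented convention: a constant x_0 = 1 absorbs the bias into w^T x
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])
    net_augmented = np.dot(X_aug, w_)

    assert np.allclose(net_separate, net_augmented)

So "weights transpose dot x" and "dot X with w_[1:], then add w_[0]" describe the same quantity once x is understood to include that constant first component.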

D.W.