
What is the state of the art for transforming input data for neural networks? Their inputs need to have a constant width, and I'm trying to wrap my head around how to achieve that.

Let's say that we want to classify some books (for example, into some categories). Books have many attributes of different types, like:

  • short strings (title)
  • longer strings/documents (description)
  • dates (publishing date, author's birth date)
  • simple arrays (authors)
  • longitude/latitude (place where the book was finished, author's birth place)

How can one handle these attributes? I've already read a little about handling long strings here, but the rest, especially small arrays of attributes, is a mystery to me.
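For context, here is one common way to flatten attributes like these into a fixed-width vector: dates become a numeric offset, latitude/longitude become points on the unit sphere (so longitude wraps correctly), and a variable-length author list becomes a bag-of-words over a fixed vocabulary. This is only a sketch of standard feature-engineering practice; the field names (`published`, `finished_at`, `authors`) and the vocabulary are made up for illustration.

```python
import math
from datetime import date

def encode_date(d, ref=date(1970, 1, 1)):
    # Days since a reference date, as one numeric feature.
    return [float((d - ref).days)]

def encode_latlong(lat, lon):
    # Project coordinates onto the unit sphere so nearby places
    # get nearby vectors and longitude wraps around correctly.
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    return [math.sin(lat_r),
            math.cos(lat_r) * math.sin(lon_r),
            math.cos(lat_r) * math.cos(lon_r)]

def encode_authors(authors, vocab):
    # Bag-of-words over a fixed author vocabulary: the output width
    # is len(vocab) no matter how many authors the book has.
    vec = [0.0] * len(vocab)
    for a in authors:
        if a in vocab:
            vec[vocab[a]] += 1.0
    return vec

def encode_book(book, author_vocab):
    # Concatenate the fixed-width pieces into one input vector.
    return (encode_date(book["published"])
            + encode_latlong(*book["finished_at"])
            + encode_authors(book["authors"], author_vocab))

vocab = {"A. Author": 0, "B. Writer": 1}
book = {"published": date(2001, 5, 1),
        "finished_at": (48.85, 2.35),   # roughly Paris
        "authors": ["A. Author"]}
vec = encode_book(book, vocab)
# len(vec) == 1 + 3 + 2 == 6 for every book
```

The key property is that every book maps to a vector of the same length, so a plain feed-forward network can consume it.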

1 Answer


There is a trend towards implementations that don't need input sizes known in advance. Check out DyNet or Chainer, for example.

From DyNet's technical paper:

In DyNet's dynamic declaration strategy, computation graph construction is mostly transparent, being implicitly constructed by executing procedural code that computes the network outputs, and the user is free to use different network structures for each input.
