Sequence to Sequence Learning (seq2seq)

This typically refers to the approach described by Sutskever et al. in the 2014 paper "Sequence to Sequence Learning with Neural Networks".

Feedforward neural networks and many other models can learn complex patterns, but they require fixed-length inputs, which makes it difficult for them to handle variable-length sequences. To solve this, the authors used one LSTM (an encoder) to read the input sequence one token at a time and compress it into a fixed-dimensional vector, and a second LSTM (a decoder) to generate the output sequence from that vector.
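
To make the idea concrete, here is a minimal sketch of this encoder-decoder pattern in PyTorch. It is an illustration under simplifying assumptions, not the paper's exact setup: Sutskever et al. used deep (four-layer) LSTMs and fed the source sentence in reversed order, while the sketch below uses single-layer LSTMs, made-up vocabulary sizes, and a teacher-forced forward pass.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    # Illustrative dimensions; the original paper used much larger models.
    def __init__(self, src_vocab_size, tgt_vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab_size, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab_size, emb_dim)
        # Encoder LSTM: reads the variable-length input sequence and
        # summarizes it in its final hidden/cell state.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Decoder LSTM: generates the output sequence, conditioned on
        # the encoder's final state.
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab_size)

    def forward(self, src, tgt):
        # src: (batch, src_len), tgt: (batch, tgt_len) integer token ids.
        _, state = self.encoder(self.src_emb(src))       # fixed-size summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                         # (batch, tgt_len, tgt_vocab_size)

model = Seq2Seq(src_vocab_size=8000, tgt_vocab_size=8000)
src = torch.randint(0, 8000, (2, 7))   # two input sequences, 7 tokens each
tgt = torch.randint(0, 8000, (2, 5))   # teacher-forced target prefixes
logits = model(src, tgt)               # shape: (2, 5, 8000)
```

At inference time the decoder is instead run one step at a time, feeding each predicted token back in as the next input; the teacher-forced pass above is the usual training setup.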

A few potential applications of sequence to sequence learning include:

  • Machine translation
  • Text summarization
  • Speech-to-text conversion