Convolutional Sequence to Sequence Learning

Paper

Link: https://arxiv.org/pdf/1705.03122.pdf
Year: 2017

Summary

  • fully convolutional model for sequence to sequence learning
  • use of gated linear units eases gradient propagation (a small sketch follows this list)
  • separate attention mechanism for each decoder layer
  • outperforms strong recurrent models on very large benchmark datasets
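
A minimal sketch of a gated linear unit (GLU) convolution block, assuming a PyTorch setting; the channel count, kernel size, and class name are made up for illustration and are not taken from the paper's code.

```python
import torch
import torch.nn as nn

class GLUConvBlock(nn.Module):
    """One convolutional block with a gated linear unit, ConvS2S-style (sketch)."""
    def __init__(self, channels=256, kernel_size=3):
        super().__init__()
        # The convolution outputs 2*channels so the result can be split into
        # a "content" half A and a "gate" half B.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):               # x: (batch, channels, time)
        a, b = self.conv(x).chunk(2, dim=1)
        h = a * torch.sigmoid(b)        # GLU(A, B) = A * sigmoid(B)
        return (h + x) * (0.5 ** 0.5)   # residual connection, scaled as in the paper
```

The linear half A gives gradients an unscaled path through the stack while sigmoid(B) acts as the gate, which is why the GLU eases gradient propagation compared to tanh-style gating.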

Abstract

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 English-German and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
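
Below is a minimal sketch of the separate attention module each decoder layer carries, again in PyTorch; the tensor shapes and the way the target embedding and encoder embeddings enter the computation follow my reading of the abstract and are assumptions, not the reference implementation.

```python
import torch

def decoder_layer_attention(decoder_state, target_embed, encoder_out, encoder_plus_embed):
    """Per-decoder-layer dot-product attention over the encoder outputs (sketch).

    decoder_state:      (batch, tgt_len, dim)  output of the current decoder layer
    target_embed:       (batch, tgt_len, dim)  embeddings of the previous target tokens
    encoder_out:        (batch, src_len, dim)  top encoder states (attention keys)
    encoder_plus_embed: (batch, src_len, dim)  encoder states plus input embeddings (attention values)
    """
    query = decoder_state + target_embed                    # combine layer state with target embedding
    scores = torch.bmm(query, encoder_out.transpose(1, 2))  # (batch, tgt_len, src_len)
    weights = torch.softmax(scores, dim=-1)                 # attention distribution per target position
    context = torch.bmm(weights, encoder_plus_embed)        # weighted sum of encoder values
    return context                                          # added back into the decoder layer state
```

Because every decoder layer computes its own context, each layer can attend to different parts of the source sentence, which is the "separate attention module" the abstract refers to.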