encoder-decoder

Encoder-decoder model with Luong attention, using two LSTM layers with 500 hidden units each on both the encoder and decoder sides. The vocabulary size is 50,000 on both the source (English) and target (Dutch) sides. The model is trained on the training split of the TED dataset (https://wit3.fbk.eu/mt.php?release=2017-01-trnmted), with a maximum sequence length of 50.
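A minimal PyTorch sketch of this architecture, assuming Luong's "general" scoring function for the attention. The hyperparameters (two 500-unit LSTM layers, 50,000-word vocabularies, maximum length 50) come from the description above; all class and variable names are illustrative, not the repository's actual code.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000   # source and target vocabulary size (from the description)
HIDDEN = 500          # hidden units per LSTM layer
LAYERS = 2            # LSTM layers on both encoder and decoder
MAX_LEN = 50          # maximum sequence length

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids
        outputs, state = self.lstm(self.embed(src))
        return outputs, state  # outputs: (batch, src_len, HIDDEN)

class LuongAttention(nn.Module):
    """Luong 'general' score: score(h_t, h_s) = h_t^T W h_s."""
    def __init__(self):
        super().__init__()
        self.W = nn.Linear(HIDDEN, HIDDEN, bias=False)

    def forward(self, dec_out, enc_out):
        # dec_out: (batch, tgt_len, HIDDEN); enc_out: (batch, src_len, HIDDEN)
        scores = dec_out @ self.W(enc_out).transpose(1, 2)  # (batch, tgt_len, src_len)
        weights = torch.softmax(scores, dim=-1)             # normalize over source positions
        return weights @ enc_out                            # context: (batch, tgt_len, HIDDEN)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)
        self.attn = LuongAttention()
        self.combine = nn.Linear(2 * HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tgt, enc_out, state):
        dec_out, state = self.lstm(self.embed(tgt), state)
        context = self.attn(dec_out, enc_out)
        # Luong's attentional hidden state: h~ = tanh(W_c [c; h])
        attn_hidden = torch.tanh(self.combine(torch.cat([context, dec_out], dim=-1)))
        return self.out(attn_hidden), state

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, src, tgt):
        # Encoder's final (h, c) state initializes the decoder LSTM;
        # shapes match because both stacks use LAYERS x HIDDEN.
        enc_out, state = self.encoder(src)
        logits, _ = self.decoder(tgt, enc_out, state)
        return logits  # (batch, tgt_len, VOCAB_SIZE)

if __name__ == "__main__":
    model = Seq2Seq()
    src = torch.randint(0, VOCAB_SIZE, (4, MAX_LEN))  # dummy English batch
    tgt = torch.randint(0, VOCAB_SIZE, (4, MAX_LEN))  # dummy Dutch batch
    print(model(src, tgt).shape)                      # torch.Size([4, 50, 50000])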

