This is a PyTorch implementation of the paper *Topic-to-Essay Generation with Neural Networks* (IJCAI 2018).
MTA-LSTM stands for Multi-Topic-Aware LSTM. It utilizes a multi-topic coverage vector that learns the weight of each topic and is updated sequentially during decoding. The original TensorFlow implementation can be found here, but it is out of date and no longer maintained, so I decided to reimplement the paper in PyTorch, which in my opinion is easier and more straightforward than TensorFlow.
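To make the coverage idea concrete, here is a minimal sketch of a coverage-aware attention step, not the paper's exact equations: each topic starts with coverage 1.0, and the attention mass spent on a topic at each decoding step is subtracted, so later steps are steered toward topics not yet written about. The scoring function and normalization are illustrative assumptions.

```python
import torch

def attend_with_coverage(hidden, topic_emb, coverage, proj):
    """One coverage-aware attention step (illustrative sketch).

    hidden: (B, H) decoder state, topic_emb: (B, K, E) topic embeddings,
    coverage: (B, K) remaining attention budget per topic,
    proj: a linear map from H to E used for scoring (an assumption here).
    """
    # score each topic against the decoder state (bilinear scoring)
    scores = torch.bmm(topic_emb, proj(hidden).unsqueeze(2)).squeeze(2)  # (B, K)
    # scale attention by remaining coverage so exhausted topics fade out
    alpha = torch.softmax(scores, dim=1) * coverage
    alpha = alpha / (alpha.sum(dim=1, keepdim=True) + 1e-8)
    # topic context vector fed to the decoder at this step
    context = torch.bmm(alpha.unsqueeze(1), topic_emb).squeeze(1)        # (B, E)
    # spend the attention mass: coverage decreases monotonically
    new_coverage = (coverage - alpha).clamp(min=0.0)
    return context, new_coverage
```

The key property is that `new_coverage` never increases, which is what discourages the decoder from dwelling on a single topic.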
The first link points to the two datasets provided by the authors of the paper; the second link, the news dataset, was prepared by myself. Simply download the files and put them into the `data` folder.
- Python3
- PyTorch >= 1.2.0
- Gensim
- The full vocabulary is used instead of only the 50,000 most common words used in the paper.
- Adaptive softmax is adopted instead of a full softmax with cross-entropy loss in order to speed up training.
- The model was trained on a single GTX 1080 Ti; 100 epochs took about 2 days.
- The beam search implementation is not parallelized.
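For reference, PyTorch ships adaptive softmax as `nn.AdaptiveLogSoftmaxWithLoss`; the sketch below shows how it replaces a full softmax plus cross-entropy over a large vocabulary. The vocabulary size and cluster cutoffs here are illustrative, not the values used in this repo.

```python
import torch
import torch.nn as nn

hidden_size, vocab_size = 512, 100_000  # illustrative sizes
criterion = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_size,
    n_classes=vocab_size,
    cutoffs=[2_000, 20_000],  # frequent words go in the full-size head
)

decoder_states = torch.randn(32, hidden_size)      # one decoding step, batch of 32
targets = torch.randint(0, vocab_size, (32,))
output, loss = criterion(decoder_states, targets)  # output: per-example target log-probs
loss.backward()
```

Because rare words are handled by smaller, lower-dimensional clusters, the per-step cost is much lower than a dense `vocab_size`-way softmax.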
- Run `data/word2vec.ipynb` to create the pretrained word embedding files.
- Run `mta-lstm.ipynb` to train the model.
- Topics: 現在 未來 夢想 科學 文化 (present, future, dream, science, culture)
我的夢想是長大後成為一名科學家,為實現自己的理想努力奮鬥。我要好好學習科學文化知識,長大後成為國家的棟樑之才。我堅信,只要我們努力學習科學文化知識,長大後,我們的未來一定會更加美好。
(My dream is to become a scientist when I grow up and to strive hard to realize my ideal. I will study scientific and cultural knowledge well and grow up to be a pillar of the nation. I firmly believe that as long as we study scientific and cultural knowledge hard, our future will surely be even brighter when we grow up.)
- Topics: 美麗 夏天 人們 玩耍 來臨 (beauty, summer, people, play, arrival)
夏天,是一個美麗的季節。夏天,樹木長得蔥蔥蘢,枝繁葉茂。夏天,池塘裡的水清了,孩子們也可以在水裡自由字在地玩耍了。小朋友們在這裡捉迷藏、嬉戲、玩耍、玩耍。
(Summer is a beautiful season. In summer, the trees grow lush, with dense branches and leaves. In summer, the water in the pond turns clear, and children can play freely in it. The children play hide-and-seek, frolic, and play here. Note: minor disfluencies above are the model's raw output, reproduced verbatim.)