End-To-End Memory Networks in TensorFlow
TensorFlow implementation of End-To-End Memory Networks for language modeling (see Section 5 of the paper). The original Torch code can be found here.
Prerequisites
This code requires TensorFlow. A sample Penn Treebank (PTB) corpus, a popular benchmark for language models, is included in the data
directory. You can also use your own text dataset, which should be formatted like this.
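A PTB-style corpus is plain text, whitespace-tokenized, with one sentence per line. A minimal sketch of turning such text into word ids (function and variable names here are illustrative, not taken from this repository):

```python
def tokenize(text):
    # Replace newlines with an end-of-sentence marker, then split on whitespace.
    return text.replace("\n", " <eos> ").split()

def build_vocab(words):
    # Assign each word an integer id in order of first appearance.
    word2idx = {}
    for w in words:
        word2idx.setdefault(w, len(word2idx))
    return word2idx

sample = "the cat sat\nthe dog ran"
words = tokenize(sample)
vocab = build_vocab(words)
ids = [vocab[w] for w in words]   # integer sequence fed to the model
```
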
Usage
To train a model with 6 hops and memory size of 100 (best model described in the paper), run the following command:
$ python main.py --nhop 6 --memsize 100
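The `--nhop` and `--memsize` flags control the number of memory hops and the number of memory slots. Each hop attends over the memory with the current controller state and reads out a weighted sum, roughly as described in the paper. A minimal NumPy sketch of one hop (simplified: the paper's language-model variant applies a linear map H to u before adding the read vector; here an identity is used for brevity):

```python
import numpy as np

def memory_hop(u, memory_in, memory_out):
    """One memory hop: match query u against input memories,
    softmax the scores, read from output memories, update u.
    Shapes: u (d,), memory_in/memory_out (memsize, d)."""
    scores = memory_in @ u                 # inner-product match
    p = np.exp(scores - scores.max())
    p /= p.sum()                           # softmax attention over memory slots
    o = p @ memory_out                     # attention-weighted read
    return u + o                           # simplified state update (paper uses H @ u + o)

rng = np.random.default_rng(0)
d, memsize = 4, 100
u = rng.standard_normal(d)
m_in = rng.standard_normal((memsize, d))
m_out = rng.standard_normal((memsize, d))
for _ in range(6):                         # --nhop 6
    u = memory_hop(u, m_in, m_out)
```
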
To see all training options, run:
$ python main.py --help
(Optional) If you want to see a progress bar, install progress with pip:
$ pip install progress
$ python main.py --show --nhop 6 --memsize 100
Author
Taehoon Kim / @carpedm20