- To run the code, Python 3.6 and PyTorch 0.2 are needed.
- The training code is in `mtl_learning.py`, and the testing code is in `mtl_testing.py`.
- The code in `mtl_learning.py` is largely self-documented.
- For training, the datasets should be present as pickle files in the `pickles/` directory, and the embeddings file should be present in the `data/` directory.
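To illustrate the expected data layout, here is a minimal sketch of writing and reading a dataset pickle. The filename `train.pkl` and the (source, target) pair format are assumptions for illustration; the actual names and structure are defined in `mtl_learning.py`.

```python
import os
import pickle

# Hypothetical example: the real pickle filenames and record format
# are whatever mtl_learning.py expects; "train.pkl" is just a stand-in.
os.makedirs("pickles", exist_ok=True)

# A toy dataset of (source sentence, target sentence) pairs.
dataset = [("the cat sat", "a cat was sitting")]

with open("pickles/train.pkl", "wb") as f:
    pickle.dump(dataset, f)

# Loading it back, as a training script would at startup.
with open("pickles/train.pkl", "rb") as f:
    pairs = pickle.load(f)

print(pairs[0][0])  # -> the cat sat
```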
directory. - In case of reloading the models and continuing the training, the models should be placed in
reloads/
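A minimal sketch of the save/reload pattern, assuming checkpoints hold PyTorch state dicts. The model class and the checkpoint name `encoder.pt` are placeholders, not the names used by this project.

```python
import os
import torch
import torch.nn as nn

# Hypothetical example: the real model classes and checkpoint names
# are defined in mtl_learning.py; a single linear layer stands in here.
os.makedirs("reloads", exist_ok=True)

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "reloads/encoder.pt")

# Continuing training later: rebuild the model, then load the weights.
reloaded = nn.Linear(4, 2)
reloaded.load_state_dict(torch.load("reloads/encoder.pt"))
```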
directory. - When all the necessary files and data are present, simply run
python mtl_learning.py
for training andpython mtl_testing.py
for testing. - The outputs of the training would be present in a file named
outputs.txt
and that of testing would be present intest_output.txt
The project is inspired by Pasunuru et al.'s (2017) work on "Towards Improving Abstractive Summarization via Entailment Generation" and Sean Robertson's tutorial on seq2seq translation.