This fork contains bug fixes plus support for resuming training and fine-tuning models.
This PyTorch package implements the Stochastic Answer Network (SAN) for Machine Reading Comprehension, as described in:

Xiaodong Liu, Yelong Shen, Kevin Duh, Jianfeng Gao.
Stochastic Answer Networks for Machine Reading Comprehension.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
arXiv version

Xiaodong Liu, Wei Li, Yuwei Fang, Aerin Kim, Kevin Duh and Jianfeng Gao.
Stochastic Answer Networks for SQuAD 2.0.
Technical Report.
arXiv version

Please cite the above papers if you use this code.
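The key idea in SAN's answer module is to run several reasoning steps and, during training, randomly drop individual step predictions before averaging them ("stochastic prediction dropout"); at inference time all steps are averaged. A minimal NumPy sketch of that averaging step, assuming per-step probability distributions as input (the function name and shapes are illustrative, not this repo's actual API):

```python
import numpy as np

def san_answer_average(step_probs, drop_prob=0.0, rng=None):
    """Average per-step answer distributions with stochastic prediction dropout.

    `step_probs` has shape (T, num_classes): one probability distribution per
    reasoning step. During training, each step's prediction is dropped with
    probability `drop_prob` and the survivors are averaged; at inference all
    T steps are averaged. Illustrative sketch, not the repo's actual code.
    """
    step_probs = np.asarray(step_probs, dtype=float)
    num_steps = step_probs.shape[0]
    if rng is None or drop_prob <= 0.0:
        keep = np.ones(num_steps, dtype=bool)  # inference: keep every step
    else:
        keep = rng.random(num_steps) >= drop_prob
        if not keep.any():
            keep[rng.integers(num_steps)] = True  # always keep at least one step
    return step_probs[keep].mean(axis=0)
```

With `drop_prob=0.0` (or no `rng`) this reduces to a plain average over the T steps, which is the inference-time behavior.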
## Setup Environment

- python 3.6
- install requirements:
  ```bash
  pip install -r requirements.txt
  ```
- download data/word2vec:
  ```bash
  sh download.sh
  ```
- you might need to download the `en` model for spaCy:
  ```bash
  python -m spacy download en             # default English model (~50MB)
  python -m spacy download en_core_web_md # larger English model (~1GB)
  ```
Alternatively, pull our published Docker image: `allenlao/pytorch-allennlp-rt`.
## Train a SAN Model on SQuAD v1.1

- preprocess the data:
  ```bash
  python prepro.py
  ```
- train a model:
  ```bash
  python train.py
  ```
## Train a SAN Model on SQuAD v2.0

- preprocess the data:
  ```bash
  python prepro.py --v2_on
  ```
- train a model:
  ```bash
  python train.py --v2_on --dev_gold data/dev-v2.0.json
  ```
## Use ELMo

- download the ELMo resources from AllenNLP
- train a model with ELMo:
  ```bash
  python train.py --elmo_on
  ```

Note that we have only tested ELMo on SQuAD v1.1.
## TODO

- Multi-task training.
- Add BERT.
## Acknowledgments

Some of the code is adapted from: https://github.com/hitvoice/DrQA
ELMo is from: https://allennlp.org
## Results

We report the results produced by this package as follows.

| Dataset | EM/F1 on Dev |
| --- | --- |
| SQuAD v1.1 (Rajpurkar et al., 2016) | 76.8/84.6 (vs. 76.2/84.1 in the SAN paper) |
| SQuAD v2.0 (Rajpurkar et al., 2018) | 69.5/72.7 (official SQuAD v2.0 submission) |
| NewsQA (Trischler et al., 2016) | 55.8/67.9 |
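The EM/F1 numbers above follow the standard SQuAD evaluation: exact match after answer normalization, and token-level F1 between prediction and gold answer. A simplified sketch of those two metrics (mirroring the official normalization rules; not the evaluation script this repo actually uses):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace
    (mirrors the official SQuAD answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """EM: 1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 over the normalized answer strings."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In reported scores, both metrics are averaged over the dev set, with the maximum taken over the gold answers for each question.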
Related: