ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.

ParlAI (pronounced “par-lay”) is a Python framework for sharing, training, and testing dialogue models, from open-domain chitchat to VQA (Visual Question Answering).

Its goal is to provide researchers with a unified framework for sharing, training, and testing dialogue models; many popular dialogue datasets in one place; and integration with Amazon Mechanical Turk and Facebook Messenger for data collection and human evaluation.

ParlAI is described in the following paper: “ParlAI: A Dialog Research Software Platform” (arXiv:1705.06476); more up-to-date slides are also available.

See the news page for the latest additions & updates, and the website http://parl.ai for further docs.

Installing ParlAI

ParlAI currently requires Python 3.6 and PyTorch 1.4; it does not work with PyTorch 1.5. Dependencies of the core modules are listed in requirements.txt. Some of the included models (in parlai/agents) have additional requirements.

Run the following commands to clone the repository and install ParlAI:

git clone https://github.com/facebookresearch/ParlAI.git ~/ParlAI
cd ~/ParlAI; python setup.py develop

This will link the cloned directory to your site-packages.
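
To quickly check that the install succeeded, you can try importing the package from Python, for example:

python -c "import parlai; print('ParlAI found at', parlai.__path__[0])"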

This is the recommended installation procedure, as it provides ready access to the examples and allows you to modify anything you might need. This is especially useful if you want to submit another task to the repository.

All needed data will be downloaded to ~/ParlAI/data, and any requested non-data files will be downloaded to ~/ParlAI/downloads. If you need to clear out the space used by these files, you can safely delete these directories; any files that are needed will be downloaded again.
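
If you would rather keep the data elsewhere, the download location can usually be overridden per command with the --datapath option (option name assumed from the standard ParlAI command-line flags), for example:

parlai display_data -t squad --datapath /path/to/parlai_data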

Documentation

Full documentation is hosted on the ParlAI website, http://parl.ai.

Examples

A large set of scripts can be found in parlai/scripts; here are a few of them. Note: if any of these examples fail, check the installation section above to see if you have missed a requirement.

Display 10 random examples from the SQuAD task:

parlai display_data -t squad
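
The same scripts can also be invoked from Python. A minimal sketch, assuming the script wrappers in parlai.scripts expose a main() entry point as in recent ParlAI releases:

from parlai.scripts.display_data import DisplayData
DisplayData.main(task='squad', num_examples=10)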

Evaluate an IR baseline model on the validation set of the PersonaChat task:

parlai eval_model -m ir_baseline -t personachat -dt valid
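
Or, equivalently from Python (same assumption about the parlai.scripts wrappers as above):

from parlai.scripts.eval_model import EvalModel
EvalModel.main(model='ir_baseline', task='personachat', datatype='valid')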

Train a single-layer transformer on PersonaChat (requires PyTorch and torchtext). Details: embedding size 300, 4 attention heads, 2 epochs with batch size 64; word vectors are initialized with fastText, and the other elements of the batch are used as negatives during training:

parlai train_model -t personachat -m transformer/ranker -mf /tmp/model_tr6 --n-layers 1 --embedding-size 300 --ffn-size 600 --n-heads 4 --num-epochs 2 -veps 0.25 -bs 64 -lr 0.001 --dropout 0.1 --embedding-type fasttext_cc --candidates batch
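
The same run can be launched from Python; a sketch, assuming the long-form option names map directly to keyword arguments (-veps is --validation-every-n-epochs, -bs is --batchsize, -lr is --learningrate):

from parlai.scripts.train_model import TrainModel
TrainModel.main(
    task='personachat',
    model='transformer/ranker',
    model_file='/tmp/model_tr6',
    n_layers=1,
    embedding_size=300,
    ffn_size=600,
    n_heads=4,
    num_epochs=2,
    validation_every_n_epochs=0.25,
    batchsize=64,
    learningrate=0.001,
    dropout=0.1,
    embedding_type='fasttext_cc',
    candidates='batch',
)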

Code Organization

The code is set up into several main directories:

  • core: contains the primary code for the framework
  • agents: contains agents which can interact with the different tasks (e.g. machine learning models)
  • scripts: contains a number of useful scripts, like training, evaluating, interactive chatting, ...
  • tasks: contains code for the different tasks available from within ParlAI
  • mturk: contains code for setting up Mechanical Turk, as well as sample MTurk tasks
  • messenger: contains code for interfacing with Facebook Messenger
  • zoo: contains code to directly download and use pretrained models from our model zoo
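
To make the roles of core, agents, and tasks concrete, here is a minimal sketch of the basic interaction loop, assuming the create_agent and create_task helpers in parlai.core (the repeat_label agent used here simply echoes the label and is only for illustration):

# Build an agent, pair it with a task teacher, and step through a few turns.
from parlai.core.params import ParlaiParser
from parlai.core.agents import create_agent
from parlai.core.worlds import create_task

parser = ParlaiParser(add_model_args=True)
opt = parser.parse_args(['--task', 'squad', '--model', 'repeat_label'])
agent = create_agent(opt)          # the model side of the conversation
world = create_task(opt, agent)    # pairs the agent with the squad teacher
for _ in range(10):
    world.parley()                 # one teacher/agent exchange
    print(world.display())         # pretty-print that exchange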

Support

If you have any questions, bug reports, or feature requests, please don't hesitate to post on our GitHub Issues page.

The Team

ParlAI is currently maintained by Emily Dinan, Dexter Ju, Margaret Li, Spencer Poff, Pratik Ringshia, Stephen Roller, Kurt Shuster, Eric Michael Smith, Jack Urbanek, Jason Weston, Mary Williamson, and Jing Xu.

Former major contributors and maintainers include Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Antoine Bordes, Devi Parikh, Dhruv Batra, Filipe de Avila Belbute Peres, Chao Pan, and Vedant Puri.

Citation

Please cite the arXiv paper if you use ParlAI in your work:

@article{miller2017parlai,
  title={ParlAI: A Dialog Research Software Platform},
  author={{Miller}, A.~H. and {Feng}, W. and {Fisch}, A. and {Lu}, J. and {Batra}, D. and {Bordes}, A. and {Parikh}, D. and {Weston}, J.},
  journal={arXiv preprint arXiv:{1705.06476}},
  year={2017}
}

License

ParlAI is MIT licensed. See the LICENSE file for details.