`jiant` is a software toolkit for natural language processing research, designed to facilitate work on multitask learning and transfer learning for sentence understanding tasks.
A few things you might want to know about `jiant`:

- `jiant` is configuration-driven. You can run an enormous variety of experiments by simply writing configuration files. Of course, if you need to add any major new features, you can also easily edit or extend the code.
- `jiant` contains implementations of strong baselines for the GLUE and SuperGLUE benchmarks, and it's the recommended starting point for work on these benchmarks.
- `jiant` was developed at the 2018 JSALT Workshop by the General-Purpose Sentence Representation Learning team and is maintained by the NYU Machine Learning for Language Lab, with help from many outside collaborators (especially Google AI Language's Ian Tenney).
- `jiant` is built on PyTorch. It also uses many components from AllenNLP and the HuggingFace Transformers implementations for GPT, BERT, and other transformer models.
- The name `jiant` doesn't mean much. The 'j' stands for JSALT. That's all the acronym we have.
To find the setup instructions for using `jiant` and to run a simple example demo experiment using data from GLUE, follow this getting started tutorial!
Our official documentation is here: https://jiant.info/documentation#/
To run an experiment, make a config file similar to `jiant/config/demo.conf` with your model configuration. In addition, you can use the `--overrides` flag to override specific variables. For example:

```sh
python main.py --config_file jiant/config/demo.conf \
    --overrides "exp_name = my_exp, run_name = foobar, d_hid = 256"
```

will run the demo config, but output to `$JIANT_PROJECT_PREFIX/my_exp/foobar`.
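For reference, such a config file might look roughly like the sketch below. This is only a sketch: it assumes the HOCON-style `key = value` syntax of the existing `.conf` files, the file name is a placeholder, and the keys are just the ones from the override example above.

```sh
# Illustrative sketch only: write a small config that builds on demo.conf,
# then point main.py at it. The `include` line assumes HOCON-style includes.
cat > jiant/config/my_exp.conf <<'EOF'
include "demo.conf"
exp_name = my_exp
run_name = foobar
d_hid = 256
EOF

python main.py --config_file jiant/config/my_exp.conf
```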
To run the demo config, you will have to set environment variables. The best way to achieve that is to follow the instructions in `user_config_template.sh`:

- `$JIANT_PROJECT_PREFIX`: the directory where the outputs will be saved.
- `$JIANT_DATA_DIR`: the location of the saved data. This is usually the location of the GLUE data in a simple default setup.
- `$WORD_EMBS_FILE`: the location of any word embeddings you want to use (not necessary when using ELMo, GPT, or BERT). You can download GloVe (840B) here or fastText (2M) here.

To have `user_config.sh` run automatically, follow the instructions in `scripts/export_from_bash.sh`.
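For concreteness, a filled-in `user_config.sh` might look something like this sketch; the variable names come from the list above, but the paths are placeholders you would replace with your own.

```sh
# Sketch of a filled-in user_config.sh (paths are illustrative placeholders).
export JIANT_PROJECT_PREFIX=/path/to/experiment_outputs
export JIANT_DATA_DIR=/path/to/glue_data
export WORD_EMBS_FILE=/path/to/glove.840B.300d.txt  # not needed for ELMo, GPT, or BERT
```

Remember to source the file (e.g. `source user_config.sh`) rather than execute it, so the exports take effect in your current shell.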
If you use `jiant` in academic work, please cite it directly:
```
@misc{wang2019jiant,
    author = {Alex Wang and Ian F. Tenney and Yada Pruksachatkun and Katherin Yu and Jan Hula and Patrick Xia and Raghu Pappagari and Shuning Jin and R. Thomas McCoy and Roma Patel and Yinghui Huang and Jason Phang and Edouard Grave and Haokun Liu and Najoung Kim and Phu Mon Htut and Thibault F\'evry and Berlin Chen and Nikita Nangia and Anhad Mohananey and Katharina Kann and Shikha Bordia and Nicolas Patry and David Benton and Ellie Pavlick and Samuel R. Bowman},
    title = {\texttt{jiant} 1.2: A software toolkit for research on general-purpose text understanding models},
    howpublished = {\url{http://jiant.info/}},
    year = {2019}
}
```
`jiant` has been used in these papers so far:
- Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling (formerly "Looking for ELMo's Friends")
- What do you learn from context? Probing for sentence structure in contextualized word representations ("edge probing")
- BERT Rediscovers the Classical NLP Pipeline ("BERT layer paper")
- Probing What Different NLP Tasks Teach Machines about Function Word Comprehension ("function word probing")
- Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs ("BERT NPI paper")
To exactly reproduce experiments from the ELMo's Friends paper, use the `jsalt-experiments` branch. It contains a snapshot of the code as of early August 2018, potentially with updated documentation.
For the edge probing paper and the BERT layer paper, see the `probing/` directory.
For the function word probing paper, use this branch and refer to the instructions in the `scripts/fwords/` directory.
For the BERT NPI paper, follow the instructions in `scripts/bert_npi` on the `blimp-and-npi` branch.
Post an issue here on GitHub if you have any problems, and create a pull request if you make any improvements (substantial or cosmetic) to the code that you're willing to share.
We use the `black` coding style with a line limit of 100. After installing the requirements, simply running `pre-commit install` should ensure you comply with this in all your future commits. If you're adding features or fixing a bug, please also add tests.
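A minimal sketch of that workflow (run after the requirements are installed; the manual `black` invocation is optional and just mirrors what the hook enforces):

```sh
# One-time setup: register the repository's pre-commit hooks in your clone.
pre-commit install

# Optional: format the code manually; 100 is the line limit noted above.
black --line-length 100 .
```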
For any PR, make sure to update any existing `conf` files, tutorials, and scripts to match your changes. If your PR adds or changes functionality that can be directly tested, add or update a test.

For PRs that typical users will need to be aware of, also make a matching PR to the documentation. We will merge that documentation PR once the original PR is merged in and pushed out in a release. (Proposals for better ways to do this are welcome.)
For PRs that change package dependencies, update both `environment.yml` (used for conda) and `setup.py` (used by pip, and in automatic CircleCI tests).
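If you do change dependencies, one way to sanity-check the change locally is to reinstall from both specs; this is just an illustrative sketch, not a required workflow:

```sh
# Update the conda environment from the edited spec.
conda env update -f environment.yml

# Reinstall the package with pip so setup.py's dependency list is exercised too.
pip install -e .
```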
Releases are identified using git tags and distributed via PyPI for pip installation. After passing CI tests and creating a new git tag for a release, it can be uploaded to PyPI by running:
```sh
# create distribution
python setup.py sdist bdist_wheel
# upload to PyPI
python -m twine upload dist/*
```
More details can be found in `setup.py`.
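The tag creation mentioned above might look like the following (the version number is a placeholder):

```sh
# Tag the release commit and push the tag; replace vX.Y.Z with the new version.
git tag vX.Y.Z
git push origin vX.Y.Z
```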
This package is released under the MIT License. The material in the `allennlp_mods` directory is based on AllenNLP, which was originally released under the Apache 2.0 license.
- Part of the development of `jiant` took place at the 2018 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, and was supported by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, Microsoft and Mitsubishi Electric Research Laboratories.
- This work was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program, and by support from Intuit Inc.
- We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU in this work.
- Developer Alex Wang is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
- Developer Yada Pruksachatkun is supported by the Moore-Sloan Data Science Environment as part of the NYU Data Science Services initiative.
- Sam Bowman's work on `jiant` during Summer 2019 took place in his capacity as a visiting researcher at Google.