An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models against linguistic knowledge.
For more information, check out our manuscript:
Dodrio: Exploring Transformer Models with Interactive Visualization. Zijie J. Wang, Robert Turko, and Duen Horng Chau. arXiv preprint 2021. arXiv:2103.14625.
For a live demo, visit: http://poloclub.github.io/dodrio/
Clone or download this repository:
git clone git@github.com:poloclub/dodrio.git
# use degit if you don't want to download commit histories
degit poloclub/dodrio
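Note that degit is a separate npm package and is not bundled with git; if you don't already have it, one way to set it up (assuming a standard npm installation) is:

```shell
# install degit globally so the `degit` command is on your PATH
npm install -g degit

# scaffold the repository without its commit history
degit poloclub/dodrio dodrio
```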
Install the dependencies:
npm install
Then run Dodrio:
npm run dev
Navigate to localhost:5000. You should see Dodrio running in your browser :)
To see how we trained the Transformer, or to customize the visualization with a different model or dataset, visit the ./data-generation/ directory.
Dodrio was created by Jay Wang, Robert Turko, and Polo Chau.
@article{wangDodrioExploringTransformer2021,
title = {Dodrio: {{Exploring Transformer Models}} with {{Interactive Visualization}}},
shorttitle = {Dodrio},
author = {Wang, Zijie J. and Turko, Robert and Chau, Duen Horng},
year = {2021},
month = mar,
url = {http://arxiv.org/abs/2103.14625},
archiveprefix = {arXiv},
eprint = {2103.14625},
journal = {arXiv:2103.14625}
}
The software is available under the MIT License.
If you have any questions, feel free to open an issue or contact Jay Wang.