BertViz is a tool for visualizing attention in Transformer models, supporting most models from the HuggingFace library (BERT, GPT-2, RoBERTa, BART, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace.
⚡️ Quickstart | 🕹️ Colab tutorial | 📖 Documentation | ✍️ Blog post | 🔬 Paper
The head view visualizes attention for one or more attention heads in the same layer. It is based on the excellent Tensor2Tensor visualization tool by Llion Jones.
🕹 Try out this interactive Colab tutorial for the head view.
The model view shows a bird's-eye view of attention across all layers and heads.
🕹 Try out this interactive Colab tutorial for the model view.
The neuron view visualizes individual neurons in the query and key vectors and shows how they are used to compute attention.
🕹 Try out this interactive Colab tutorial for the neuron view.
pip install bertviz
You must also have Jupyter Notebook and ipywidgets installed in order to run BertViz in a notebook:
pip install jupyterlab
pip install ipywidgets
For more details on installing Jupyter or ipywidgets, consult the documentation here and here.
Start Jupyter Notebook:
jupyter notebook
Click New to create a new notebook, and select Python 3 (ipykernel) if prompted.
Add the following cell:
from transformers import AutoTokenizer, AutoModel, utils
from bertviz import model_view
utils.logging.set_verbosity_error() # Remove line to see warnings
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased", output_attentions=True)
inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1] # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
model_view(attention, tokens)
And run it (Shift + Enter)! The visualization may take a few seconds to load.
See Documentation for additional examples and advanced features.
If you wish to run BertViz in Google Colab, simply add the following cell before the above cell:
!pip install bertviz
You may also run any of the sample notebooks:
git clone --depth 1 git@github.com:jessevig/bertviz.git
cd bertviz/notebooks
jupyter notebook
Check out this Colab notebook for an interactive tutorial on BertViz.
Example notebooks for specific use cases:
Head View: BERT (Notebook, Colab) • GPT-2 (Notebook, Colab) • XLNet (Notebook) • RoBERTa (Notebook) • XLM (Notebook) • ALBERT (Notebook) • DistilBERT (Notebook) • BART (Notebook)
Model View: BERT (Notebook, Colab) • GPT-2 (Notebook, Colab) • XLNet (Notebook) • RoBERTa (Notebook) • XLM (Notebook) • ALBERT (Notebook) • DistilBERT (Notebook) • BART (Notebook)
Neuron View*: BERT (Notebook, Colab) • GPT-2 (Notebook, Colab) • RoBERTa (Notebook)
*The neuron view only supports the three models listed above (see the neuron view documentation), while the head view and model view support most HuggingFace models.
- Self-Attention Models (BERT, GPT-2, etc.)
- Encoder-Decoder Models (BART, MarianMT, etc.)
- Installing from source
- Additional options
First, load a HuggingFace model, either a pre-trained model as shown below or your own fine-tuned model. Be sure to set output_attentions=True.
from transformers import AutoTokenizer, AutoModel, utils
utils.logging.set_verbosity_error() # Remove this line to see warnings
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
Then prepare inputs and compute attention:
inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1] # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
Finally, display the attention weights using the head_view or model_view function:
from bertviz import head_view
head_view(attention, tokens)
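The model view is invoked the same way, with the same attention and tokens:
from bertviz import model_view
model_view(attention, tokens)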
For more advanced use cases, e.g., specifying a two-sentence input to the model, please refer to the sample notebooks.
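As a quick illustration, continuing from the code above, a two-sentence input might look like the following sketch, which assumes the head view's sentence_b_start parameter (marking where the second sentence begins):
sentence_a, sentence_b = "The cat sat on the mat", "The cat lay on the rug"
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt')
outputs = model(**inputs)
attention = outputs[-1]  # Attention weights (output_attentions=True)
tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
sentence_b_start = inputs['token_type_ids'][0].tolist().index(1)  # Index of the first sentence-B token
head_view(attention, tokens, sentence_b_start=sentence_b_start)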
The neuron view is invoked differently from the head view and model view because it requires access to the model's query/key vectors, which are not returned through the HuggingFace API. It is currently limited to the custom versions of BERT, GPT-2, and RoBERTa included with BertViz.
# Import specialized versions of models (that return query/key vectors)
from bertviz.transformers_neuron_view import BertModel, BertTokenizer
from bertviz.neuron_view import show
model_type = 'bert'
model_version = 'bert-base-uncased'
do_lower_case = True
sentence_a = "The cat sat on the mat"
sentence_b = "The cat lay on the rug"
model = BertModel.from_pretrained(model_version, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
show(model, model_type, tokenizer, sentence_a, sentence_b, layer=2, head=0)
The head view and model view both support encoder-decoder models.
First, load an encoder-decoder model:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", output_attentions=True)
Then prepare the inputs and compute attention:
encoder_input_ids = tokenizer("She sees the small elephant.", return_tensors="pt", add_special_tokens=True).input_ids
decoder_input_ids = tokenizer("Sie sieht den kleinen Elefanten.", return_tensors="pt", add_special_tokens=True).input_ids
outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)
encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0])
decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0])
Finally, display the visualization using either head_view or model_view:
from bertviz import model_view
model_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text
)
You may select Encoder, Decoder, or Cross attention from the drop-down menu in the upper left corner of the visualization.
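The head view takes the same encoder-decoder inputs; a minimal sketch, assuming head_view mirrors model_view's encoder/decoder keyword arguments:
from bertviz import head_view
head_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text
)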
git clone https://github.com/jessevig/bertviz.git
cd bertviz
python setup.py develop
The model view and neuron view support dark (default) and light modes. You may set the mode using the display_mode parameter:
model_view(attention, tokens, display_mode="light")
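For the neuron view, a sketch reusing the variables from the neuron view example above, assuming its show function accepts the same display_mode parameter:
show(model, model_type, tokenizer, sentence_a, sentence_b, display_mode="light", layer=2, head=0)  # assumes show() supports display_mode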
To improve the responsiveness of the tool when visualizing larger models or inputs, you may set the include_layers parameter to restrict the visualization to a subset of layers (zero-indexed). This option is available in the head view and model view.
Example: Render model view with only layers 5 and 6 displayed
model_view(attention, tokens, include_layers=[5, 6])
For the model view, you may also restrict the visualization to a subset of attention heads (zero-indexed) by setting the include_heads parameter.
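Example (layer and head values chosen purely for illustration): Render model view with only heads 0 and 7 within layers 5 and 6 displayed
model_view(attention, tokens, include_layers=[5, 6], include_heads=[0, 7])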
In the head view, you may choose a specific layer and collection of heads as the default selection when the visualization first renders. Note: this is different from the include_layers/include_heads parameters (above), which remove layers and heads from the visualization completely.
Example: Render head view with layer 2 and heads 3 and 5 pre-selected
head_view(attention, tokens, layer=2, heads=[3,5])
You may also pre-select a specific layer and single head for the neuron view.
The head_view and model_view functions may technically be used to visualize self-attention for any Transformer model, as long as the attention weights are available and follow the format specified in model_view and head_view (which is the format returned by HuggingFace models). In some cases, TensorFlow checkpoints may be loaded as HuggingFace models as described in the HuggingFace docs.
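For reference, the expected attention format is a sequence of per-layer tensors, each of shape (batch_size, num_heads, sequence_length, sequence_length). A minimal sketch with random weights, illustrative only and not tied to any real model:
import torch
from bertviz import head_view
num_layers, num_heads = 4, 8
tokens = ["[CLS]", "the", "cat", "sat", "[SEP]"]
seq_len = len(tokens)
# One (1, num_heads, seq_len, seq_len) tensor per layer, normalized over the last dimension
attention = tuple(torch.softmax(torch.randn(1, num_heads, seq_len, seq_len), dim=-1) for _ in range(num_layers))
head_view(attention, tokens)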
- This tool is designed for shorter inputs and may run slowly if the input text is very long and/or the model is very large. To mitigate this, you may wish to filter the layers displayed by setting the include_layers parameter, as described above.
- When running on Colab, some of the visualizations will fail (runtime disconnection) when the input text is long. To mitigate this, you may wish to filter the layers displayed by setting the include_layers parameter, as described above.
- The neuron view only supports the custom BERT, GPT-2, and RoBERTa models included with the tool. This view requires access to the query and key vectors, which meant modifying the model code (see the transformers_neuron_view directory); this has only been done for these three models. Also, only one neuron view may be included per notebook.
Visualizing attention weights illuminates a particular mechanism within the model architecture but does not necessarily provide a direct explanation for model predictions. See [1, 2, 3].
Jesse Vig (homepage)
A Multiscale Visualization of Attention in the Transformer Model (ACL System Demonstrations 2019).
@inproceedings{vig-2019-multiscale,
title = "A Multiscale Visualization of Attention in the Transformer Model",
author = "Vig, Jesse",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-3007",
doi = "10.18653/v1/P19-3007",
pages = "37--42",
}
This project is licensed under the Apache 2.0 License - see the LICENSE file for details
We are grateful to the authors of the following projects, which are incorporated into this repo: