Transformer-MM-Explainability

Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
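The sketch below illustrates the general idea behind this kind of attention-based explainability: relevance is accumulated layer by layer from each attention map weighted by its gradient with respect to the target score. This is a minimal, hedged illustration only; the function and argument names (`relevance_from_attention`, `attn_maps`, `attn_grads`) are placeholders and do not reflect the repo's actual API, and the exact propagation rules used in the paper (especially for bi-modal and encoder-decoder attention) may differ.

```python
# Hedged sketch of a gradient-weighted attention relevance update.
# Placeholder names; not the repository's actual interface.
import torch

def relevance_from_attention(attn_maps, attn_grads):
    """attn_maps / attn_grads: lists of tensors shaped [heads, tokens, tokens],
    one per self-attention layer, where the gradients come from backpropagating
    the target class score."""
    num_tokens = attn_maps[0].shape[-1]
    R = torch.eye(num_tokens)                      # each token starts fully relevant to itself
    for A, dA in zip(attn_maps, attn_grads):
        A_bar = (dA * A).clamp(min=0).mean(dim=0)  # head-averaged, positive gradient-times-attention
        R = R + A_bar @ R                          # propagate relevance through this layer
    return R
```

Row i of the resulting matrix can then be read as a relevance map over the input tokens for output token i (for classification, typically the [CLS] row), which can be reshaped and upsampled into a visualization.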

Primary language: Jupyter Notebook. License: MIT.
