hila-chefer/Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Jupyter Notebook · MIT License
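The paper's core idea for self-attention layers is to propagate a relevance map through the network by mixing it with gradient-weighted attention: the attention map is multiplied elementwise by its gradient, negative contributions are clipped, heads are averaged, and the result is used to update the relevance. A minimal sketch of that update rule in NumPy follows; `update_relevance` and the toy shapes are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def update_relevance(R, attn, grad):
    """One relevance-propagation step for a self-attention layer.

    R    : (tokens, tokens) running relevance map, initialized to identity.
    attn : (heads, tokens, tokens) attention probabilities.
    grad : (heads, tokens, tokens) gradients of the target score w.r.t. attn.
    """
    # Gradient-weighted attention: keep only positive contributions,
    # then average over heads.
    A_bar = np.clip(grad * attn, 0.0, None).mean(axis=0)
    # Mix the aggregated attention into the running relevance.
    return R + A_bar @ R

# Toy example: 2 heads, 3 tokens (random stand-ins for real attention/grads).
rng = np.random.default_rng(0)
tokens = 3
R = np.eye(tokens)                          # relevance starts as identity
attn = rng.random((2, tokens, tokens))
grad = rng.random((2, tokens, tokens))
R = update_relevance(R, attn, grad)
```

In the real implementation this step would run once per self-attention layer, with `attn` and `grad` captured via forward/backward hooks on the model; the bi-modal and encoder-decoder cases add analogous rules for cross-attention.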
Stargazers
- alanyannick (CS @ UC Davis)
- alexa19lab
- daisukelab (Kawasaki, Kanagawa, Japan)
- dlivshen
- dpflann
- ebigram
- Edwardmark
- fjibj (Nanjing)
- fly51fly (PRIS)
- GoGoDuck912
- haofanwang (Carnegie Mellon University)
- Hironobu-Kawaguchi (Tokyo, Japan)
- hnishi (ACCESS CO., LTD.)
- imvladikon (Israel)
- KaiserLew
- KeremTurgutlu
- krmiddlebrook (San Diego)
- kunzhan (Lanzhou University)
- lalitpagaria (Blue Pencil Strategies)
- LegendBC (Huazhong Uni. of Sci. and Tec.)
- LuoweiZhou (Google)
- miracle-wang819
- ModMorph (ModMorph.AI)
- mymuli (Nanyang Technological University | Sun Yat-sen University | Beijing University of Posts and Telecommunications)
- Norod
- onuriel
- rishikksh20 (Dubpro.ai)
- ShiYaya (USTC; CASIA)
- shouyan
- sokazaki (Tokyo, Japan)
- sungam94 (Germany)
- tals (San Francisco)
- TheodoreGalanos (Austrian Institute of Technology)
- torridgristle
- vipermu (Krea)
- vk0st (UBRD)