awesome-vision-transformers-plus

A curated list of papers and resources on self-attention for computer vision tasks, built on awesome-transformer-for-vision.

Contents

Papers and Resources

Transformer

Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. NeurIPS 2017.

The Annotated Transformer

The Illustrated Transformer
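
For readers skimming the resources above, the operation every paper in this list builds on can be sketched in a few lines. This is a minimal NumPy illustration of scaled dot-product self-attention (single head, no masking, hypothetical weight shapes), not the implementation from any particular paper:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (n_tokens, d_model); w_q, w_k, w_v: (d_model, d_head) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise similarity, scaled by sqrt(d_head)
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys: rows sum to 1
    return weights @ v                              # attention-weighted sum of values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                     # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one updated vector per token
```

The vision papers below differ mainly in where the tokens come from (image patches, feature-map positions, points) and in how the quadratic cost of the `scores` matrix is tamed.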

Self-attention Augmented CNNs

Non-local Neural Networks. Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He. CVPR 2018.

GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, Han Hu. ICCVW 2019.

CCNet: Criss-Cross Attention for Semantic Segmentation. Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Humphrey Shi, Wenyu Liu, Thomas S. Huang. ICCV 2019.

An Empirical Study of Spatial Attention Mechanisms in Deep Networks. Xizhou Zhu, Dazhi Cheng, Zheng Zhang, Stephen Lin, Jifeng Dai. ICCV 2019.

Attention Augmented Convolutional Networks. Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, Quoc V. Le. ICCV 2019.

Disentangled Non-Local Neural Networks. Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, Han Hu. ECCV 2020.

Early Attempts

Local Relation Networks for Image Recognition. Han Hu, Zheng Zhang, Zhenda Xie, Stephen Lin. ICCV 2019.

Stand-Alone Self-Attention in Vision Models. Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens. NeurIPS 2019.

Exploring Self-attention for Image Recognition. Hengshuang Zhao, Jiaya Jia, Vladlen Koltun. CVPR 2020.

Axial Attention in Multidimensional Transformers. Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, Tim Salimans. Arxiv 2019.

2D Vision Tasks

Classification

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. ICLR 2021.

Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. Arxiv 2020.

Bottleneck Transformers for Visual Recognition. Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani. Arxiv 2021.

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet. Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Francis EH Tay, Jiashi Feng, Shuicheng Yan. Arxiv 2021.
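
The classification models above share one preprocessing step: an image is cut into fixed-size patches, each flattened into a token vector ("an image is worth 16x16 words"). A minimal NumPy sketch of that tokenization, assuming a square patch size that divides the image dimensions (positional embeddings and the linear projection are omitted):

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an (H, W, C) image into non-overlapping patch x patch tiles
    and flatten each tile into a token vector of length patch*patch*C."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image size must be divisible by patch size"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (n_patches, p*p*C)
    tokens = (img.reshape(h // patch, patch, w // patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, patch * patch * c))
    return tokens

img = np.zeros((224, 224, 3))          # standard ImageNet resolution
tokens = image_to_patch_tokens(img)
print(tokens.shape)  # (196, 768): 14x14 patches, each 16*16*3 values
```

In the papers themselves each token is then linearly projected to the model dimension and prepended with a class token before entering the Transformer encoder; T2T-ViT additionally re-tokenizes overlapping neighborhoods to model local structure.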

Detection

Toward Transformer-Based Object Detection. Josh Beal, Eric Kim, Eric Tzeng, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk. Arxiv 2020.

Rethinking Transformer-based Set Prediction for Object Detection. Zhiqing Sun, Shengcao Cao, Yiming Yang, Kris Kitani. Arxiv 2020.

End-to-End Object Detection with Transformers. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. ECCV 2020.

Deformable DETR: Deformable Transformers for End-to-End Object Detection. Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. ICLR 2021.

UP-DETR: Unsupervised Pre-training for Object Detection with Transformers. Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen. Arxiv 2020.

End-to-End Object Detection with Adaptive Clustering Transformer. Minghang Zheng, Peng Gao, Xiaogang Wang, Hongsheng Li, Hao Dong. Arxiv 2020.

Fast Convergence of DETR with Spatially Modulated Co-Attention. Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, Hongsheng Li. Arxiv 2021.

Segmentation

Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation. Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. ECCV 2020.

Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, Li Zhang. Arxiv 2020.

End-to-End Video Instance Segmentation with Transformers. Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, Huaxia Xia. Arxiv 2020.

SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation. Brendan Duke, Abdalla Ahmed, Christian Wolf, Parham Aarabi, Graham W. Taylor. Arxiv 2021.

Tracking

TransTrack: Multiple-Object Tracking with Transformer. Peize Sun, Yi Jiang, Rufeng Zhang, Enze Xie, Jinkun Cao, Xinting Hu, Tao Kong, Zehuan Yuan, Changhu Wang, Ping Luo. Arxiv 2020.

Image Generation

Image Transformer. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Łukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran. ICML 2018.

Taming Transformers for High-Resolution Image Synthesis. Patrick Esser, Robin Rombach, Björn Ommer. Arxiv 2020.

Image Processing

Learning Texture Transformer Network for Image Super-Resolution. Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, Baining Guo. CVPR 2020.

Learning Joint Spatial-Temporal Transformations for Video Inpainting. Yanhong Zeng, Jianlong Fu, Hongyang Chao. ECCV 2020.

Colorization Transformer. Manoj Kumar, Dirk Weissenborn, Nal Kalchbrenner. ICLR 2021.

Pre-Trained Image Processing Transformer. Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao. Arxiv 2020.

Action Understanding

Video Action Transformer Network. Rohit Girdhar, Joao Carreira, Carl Doersch, Andrew Zisserman. CVPR 2019.

Video Transformer Network. Daniel Neimark, Omri Bar, Maya Zohar, Dotan Asselmann. Arxiv 2021.

3D Vision Tasks

Point Cloud Processing

PCT: Point Cloud Transformer. Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu. Arxiv 2020.

Point Transformer. Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun. Arxiv 2020.

Motion Modeling

Learning to Generate Diverse Dance Motions with Transformer. Jiaman Li, Yihang Yin, Hang Chu, Yi Zhou, Tingwu Wang, Sanja Fidler, Hao Li. Arxiv 2020.

A Spatio-temporal Transformer for 3D Human Motion Prediction. Emre Aksan, Peng Cao, Manuel Kaufmann, Otmar Hilliges. Arxiv 2020.

Human Body Modeling

End-to-End Human Pose and Mesh Reconstruction with Transformers. Kevin Lin, Lijuan Wang, Zicheng Liu. Arxiv 2020.

Theory

On the Relationship between Self-Attention and Convolutional Layers. Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi. ICLR 2020.

Survey

A Survey on Visual Transformer. Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao. Arxiv 2020.

Transformers in Vision: A Survey. Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, Mubarak Shah. Arxiv 2021.

Others

Music Transformer: Generating Music with Long-Term Structure. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck. ICLR 2019.

Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers. Lisa Anne Hendricks, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, Aida Nematzadeh. Arxiv 2021.