
A compilation of network architectures for vision and beyond that do not use the self-attention mechanism.

License: Apache-2.0

Networks-Beyond-Attention (NBA)

A list of modern (convolutional) network architectures for vision. Note that we only list recent works based on convolution, modulation, or related variants. Please refer to other, more comprehensive lists for networks built on attention or MLP-style designs.

Since this is a new trend, feel free to submit a pull request or raise an issue if you find any missing papers!
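A common thread in the listed works (e.g. ConvNeXt, MetaFormer-style blocks) is replacing self-attention with a depthwise convolution as the spatial token mixer. As a rough illustration only (not the implementation of any specific paper), here is a minimal NumPy sketch of a depthwise convolution, where each channel is filtered independently:

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Depthwise convolution: each channel is filtered independently.

    x:       feature map of shape (H, W, C)
    kernels: per-channel filters of shape (k, k, C), with k odd
    Zero padding keeps the spatial size unchanged.
    """
    H, W, C = x.shape
    k = kernels.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]          # (k, k, C) window
            out[i, j, :] = (patch * kernels).sum(axis=(0, 1))
    return out

# Toy check: a 3x3 identity kernel per channel leaves the input unchanged.
x = np.random.rand(8, 8, 4)
ident = np.zeros((3, 3, 4))
ident[1, 1, :] = 1.0
y = depthwise_conv(x, ident)
assert np.allclose(x, y)
```

In the papers below this mixer is typically wrapped with normalization, pointwise (1x1) convolutions, and residual connections; the differences between works lie mainly in kernel size, gating/modulation, and scaling strategy.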

Overview

Papers

Image Classification

On the Connection between Local Attention and Dynamic Depth-wise Convolution. ICLR 2022.
Qi Han, Zejia Fan, Qi Dai, Lei Sun, Ming-Ming Cheng, Jiaying Liu, Jingdong Wang.
Release date: 8 June 2021.
[paper] [code]

MetaFormer Is Actually What You Need for Vision. CVPR 2022.
Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan.
Release date: 22 Nov 2021.
[paper] [code]

A ConvNet for the 2020s. CVPR 2022.
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
Release date: 10 Jan 2022.
[paper] [code]

Visual Attention Network. arXiv 2022.
Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
Release date: 20 Feb 2022.
[paper] [code]

Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs. CVPR 2022.
Xiaohan Ding, Xiangyu Zhang, Yizhuang Zhou, Jungong Han, Guiguang Ding, Jian Sun.
Release date: 13 Mar 2022.
[paper] [code]

Focal Modulation Networks. NeurIPS 2022.
Jianwei Yang, Chunyuan Li, Xiyang Dai, Jianfeng Gao.
Release date: 22 Mar 2022.
[paper] [code]

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. arXiv 2022.
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Mocanu, Zhangyang Wang.
Release date: 7 July 2022.
[paper] [code]

HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions. NeurIPS 2022.
Yongming Rao, Wenliang Zhao, Yansong Tang, Jie Zhou, Ser-Nam Lim, Jiwen Lu.
Release date: 28 July 2022.
[paper] [code]

Efficient Multi-order Gated Aggregation Network. arXiv 2022.
Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, Zhiyuan Chen, Jiangbin Zheng, Stan Z. Li.
Release date: 7 Nov 2022.
[paper] [code]

InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions. arXiv 2022.
Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, Xiaogang Wang, Yu Qiao.
Release date: 10 Nov 2022.
[paper] [code]

Conv2Former: A Simple Transformer-Style ConvNet for Visual Recognition. arXiv 2022.
Qibin Hou, Cheng-Ze Lu, Ming-Ming Cheng, Jiashi Feng.
Release date: 22 Nov 2022.
[paper] [code]

A Close Look at Spatial Modeling: From Attention to Convolution. arXiv 2022.
Xu Ma, Huan Wang, Can Qin, Kunpeng Li, Xingchen Zhao, Jie Fu, Yun Fu.
Release date: 23 Dec 2022.
[paper] [code]

ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders. arXiv 2023.
Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
Release date: 2 Jan 2023.
[paper] [code]

Image Segmentation

SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation. NeurIPS 2022.
Meng-Hao Guo, Cheng-Ze Lu, Qibin Hou, Zhengning Liu, Ming-Ming Cheng, Shi-Min Hu.
Release date: 18 Sep 2022.
[paper] [code]

3D Understanding

Scaling up Kernels in 3D CNNs. arXiv 2022.
Yukang Chen, Jianhui Liu, Xiaojuan Qi, Xiangyu Zhang, Jian Sun, Jiaya Jia.
Release date: 21 June 2022.
[paper] [code]

Long Range Pooling for 3D Large-Scale Scene Understanding. arXiv 2023.
Xiang-Li Li, Meng-Hao Guo, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu.
Release date: 17 Jan 2023.
[paper] [code]

Others

LKD-Net: Large Kernel Convolution Network for Single Image Dehazing. arXiv 2022.
Pinjun Luo, Guoqiang Xiao, Xinbo Gao, Song Wu.
Release date: 5 Sep 2022.
[paper] [code]

Related Awesome Paper Lists

Awesome Visual-Transformer.

Ultimate-Awesome-Transformer-Attention.

Transformer-in-Vision.

Acknowledgement

The list format follows awesome-detection-transformer.