Preprint Link: Neighborhood Attention Transformer
By Ali Hassani[1, 2], Steven Walton[1, 2], Jiachen Li[1,2], Shen Li[3], and Humphrey Shi[1,2]
In association with SHI Lab @ University of Oregon & UIUC[1], Picsart AI Research (PAIR)[2], and Meta/Facebook AI[3]
- NA CUDA extension v0.12 released.
- NA now runs up to 40% faster and uses up to 25% less memory than Swin Transformer’s Shifted Window Self Attention.
- Improved FP16 throughput.
- Improved training speed and stability.
- See changelog.
- 1-D Neighborhood Attention support added!
- Moved the kernel to `natten/`, since there's now a single version for all three tasks, and we're adding more features to the extension.
- NA CUDA extension v0.11 released.
- It's faster in both training and inference, with a single version for all three tasks (no downstream-specific versions).
- PyTorch implementation released
- Works both with and without CUDA, but is less efficient; use the CUDA extension when possible.
- See changelog.
- Neighborhood Attention 1D (CUDA)
- Neighborhood Attention 2D (CUDA)
- Neighborhood Attention 2D (PyTorch)
- Zeros/Valid padding support
- HuggingFace Demo
We present Neighborhood Attention Transformer (NAT), an efficient, accurate, and scalable hierarchical transformer that works well on both image classification and downstream vision tasks. It is built upon Neighborhood Attention (NA), a simple and flexible attention mechanism that localizes each query's receptive field to its nearest neighboring pixels. NA is a localization of self-attention, and approaches it as the receptive field size increases. Given the same receptive field size, it is equivalent in FLOPs and memory usage to Swin Transformer's shifted window attention, while being less constrained. Furthermore, NA includes local inductive biases, which eliminate the need for extra operations such as pixel shifts. Experimental results on NAT are competitive: NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet with only 4.3 GFLOPs and 28M parameters, 51.4% box mAP on MS-COCO, and 48.4% mIoU on ADE20K.
Neighborhood Attention localizes the query's (red) receptive field to its nearest neighborhood (green). This is equivalent to dot-product self-attention when the neighborhood size matches the image dimensions. Note that pixels near the image boundary are handled as special (edge) cases.
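To make the mechanism concrete, here is a minimal pure-PyTorch sketch of 1-D neighborhood attention (illustrative only, not the optimized CUDA kernel; the function name and the single-head, unbatched shapes are our simplification). Queries near a boundary shift their window inward, and setting the kernel size to the full sequence length recovers ordinary self-attention:

```python
import torch
import torch.nn.functional as F

def neighborhood_attention_1d(q, k, v, kernel_size=3):
    """Naive 1-D neighborhood attention (illustrative sketch, not the
    optimized CUDA kernel). q, k, v: (length, dim) tensors; returns a
    (length, dim) tensor. Each query attends to its kernel_size nearest
    keys; queries near a boundary shift their window inward so every
    query still attends to exactly kernel_size keys."""
    length, dim = q.shape
    assert kernel_size <= length
    radius = kernel_size // 2
    out = torch.empty_like(q)
    for i in range(length):
        # Clamp the window at the edges instead of shrinking it.
        start = min(max(i - radius, 0), length - kernel_size)
        window = slice(start, start + kernel_size)
        attn = F.softmax(q[i] @ k[window].T / dim ** 0.5, dim=-1)
        out[i] = attn @ v[window]
    return out
```

The Python loop is only for clarity; the point of the CUDA extension is precisely to avoid materializing or iterating over these per-query windows.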
We wrote a PyTorch CUDA extension to parallelize NA. It's relatively fast, very memory-efficient, and supports half precision. There's still a lot of room for improvement, so feel free to open PRs and contribute! We've also recently released a pure-torch implementation of Neighborhood Attention.
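For reference, a compact pure-torch 2-D variant can be written with `F.unfold`, here using zero padding at the borders (one of the two padding modes listed above). The shapes and function name are our own simplification of the idea, not the extension's API:

```python
import torch
import torch.nn.functional as F

def neighborhood_attention_2d(q, k, v, kernel_size=3):
    """Pure-torch 2-D neighborhood attention via unfold, with zero
    padding at the borders (sketch only). q, k, v: (C, H, W) tensors;
    returns a (C, H, W) tensor."""
    c, h, w = q.shape
    pad = kernel_size // 2
    # Each column of the unfolded tensor holds one pixel's k*k neighborhood.
    k_n = F.unfold(k.unsqueeze(0), kernel_size, padding=pad)[0].view(c, kernel_size ** 2, h * w)
    v_n = F.unfold(v.unsqueeze(0), kernel_size, padding=pad)[0].view(c, kernel_size ** 2, h * w)
    # Dot-product attention between each query pixel and its neighborhood.
    attn = F.softmax((q.view(c, 1, h * w) * k_n).sum(0) / c ** 0.5, dim=0)
    return (attn.unsqueeze(0) * v_n).sum(1).view(c, h, w)
```

This `unfold`-based approach is simple but copies every neighborhood into memory, which is why the dedicated CUDA kernel is far more memory-efficient.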
Model | # of Params | FLOPs | Top-1 |
---|---|---|---|
NAT-Mini | 20M | 2.7G | 81.8% |
NAT-Tiny | 28M | 4.3G | 83.2% |
NAT-Small | 51M | 7.8G | 83.7% |
NAT-Base | 90M | 13.7G | 84.3% |
Details on training and validation are provided in classification.
Backbone | Network | # of Params | FLOPs | Box mAP | Mask mAP | Checkpoint |
---|---|---|---|---|---|---|
NAT-Mini | Mask R-CNN | 40M | 225G | 46.5 | 41.7 | Download |
NAT-Tiny | Mask R-CNN | 48M | 258G | 47.7 | 42.6 | Download |
NAT-Small | Mask R-CNN | 70M | 330G | 48.4 | 43.2 | Download |
NAT-Mini | Cascade Mask R-CNN | 77M | 704G | 50.3 | 43.6 | Download |
NAT-Tiny | Cascade Mask R-CNN | 85M | 737G | 51.4 | 44.5 | Download |
NAT-Small | Cascade Mask R-CNN | 108M | 809G | 52.0 | 44.9 | Download |
NAT-Base | Cascade Mask R-CNN | 147M | 931G | 52.3 | 45.1 | Download |
Details on training and validation are provided in detection.
Backbone | Network | # of Params | FLOPs | mIoU | mIoU (multi-scale) | Checkpoint |
---|---|---|---|---|---|---|
NAT-Mini | UPerNet | 50M | 900G | 45.1 | 46.4 | Download |
NAT-Tiny | UPerNet | 58M | 934G | 47.1 | 48.4 | Download |
NAT-Small | UPerNet | 82M | 1010G | 48.0 | 49.5 | Download |
NAT-Base | UPerNet | 123M | 1137G | 48.5 | 49.7 | Download |
Details on training and validation are provided in segmentation.
Salient maps comparing the original image with ViT, Swin, and NAT (images omitted).
@article{hassani2022neighborhood,
title = {Neighborhood Attention Transformer},
author = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2204.07143},
eprint = {2204.07143},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}