This is a collection of papers on detection and segmentation with Transformers.
We organize the repo by research fields.
If you find any overlooked papers or resources, please open an issue or a pull request (recommended).
detrex: A toolbox dedicated to Transformer-based object detectors, including DETR, Deformable DETR, DAB-DETR, DN-DETR, DINO, etc.
mmdetection: An open-source object detection toolbox that includes DETR and Deformable DETR.
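As a quick way to try the listed detectors, here is a minimal inference sketch using mmdetection's high-level Python API; the config and checkpoint paths are assumptions, so substitute whichever DETR-family config and weights ship with your mmdetection version.

```python
# Minimal sketch: run a DETR model with mmdetection's inference API.
# The config/checkpoint paths below are placeholders (assumptions); point
# them at the DETR-family config and weights in your mmdetection install.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/detr/detr_r50_8x2_150e_coco.py'      # assumed path
checkpoint_file = 'checkpoints/detr_r50_8x2_150e_coco.pth'  # assumed path

# Build the detector and load pretrained weights (use 'cuda:0' if available).
model = init_detector(config_file, checkpoint_file, device='cpu')

# Run inference on a single image; the result holds per-class boxes and scores.
result = inference_detector(model, 'demo/demo.jpg')
```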
[DETR] End-to-End Object Detection with Transformers.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
ECCV 2020.
[paper] [code] [detrex code]
Towards Efficient Use of Multi-Scale Features in Transformer-Based Object Detectors
Gongjie Zhang, Zhipeng Luo, Yingchen Yu, Zichen Tian, Jingyi Zhang, Shijian Lu
arXiv 2022.
[paper] [code]
Semantic-Aligned Matching for Enhanced DETR Convergence and Multi-Scale Feature Fusion
Gongjie Zhang, Zhipeng Luo, Yingchen Yu, Jiaxing Huang, Kaiwen Cui, Shijian Lu, Eric P. Xing
arXiv 2022.
[paper] [code]
Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment
Qiang Chen, Xiaokang Chen, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Gang Zeng, Jingdong Wang
arXiv 2022.
[paper]
DETRs with Hybrid Matching.
Ding Jia, Yuhui Yuan, Haodi He, Xiaopei Wu, Haojun Yu, Weihong Lin, Lei Sun, Chao Zhang, Han Hu
arXiv 2022.
[paper] [code]
Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation.
Feng Li*, Hao Zhang*, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, Heung-Yeung Shum.
arXiv 2022.
[paper] [code]
[MIMDet] Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection.
Yuxin Fang*, Shusheng Yang*, Shijie Wang*, Yixiao Ge, Ying Shan, Xinggang Wang
arXiv 2022.
[paper] [code]
DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection.
Hao Zhang*, Feng Li*, Shilong Liu*, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum
arXiv 2022.
[paper] [code] [detrex code]
Recurrent Glimpse-based Decoder for Detection with Transformer.
Zhe Chen, Jing Zhang, Dacheng Tao.
CVPR 2022.
[paper] [code]
AdaMixer: A Fast-Converging Query-Based Object Detector.
Ziteng Gao, Limin Wang, Bing Han, Sheng Guo.
CVPR 2022.
[paper] [code]
DN-DETR: Accelerate DETR Training by Introducing Query DeNoising.
Feng Li*, Hao Zhang*, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang.
CVPR 2022.
[paper] [code] [detrex code]
Accelerating DETR Convergence via Semantic-Aligned Matching.
Gongjie Zhang, Zhipeng Luo, Yingchen Yu, Kaiwen Cui, Shijian Lu.
CVPR 2022.
[paper] [code]
DETReg: Unsupervised Pretraining with Region Priors for Object Detection.
Amir Bar, Xin Wang, Vadim Kantorov, Colorado J Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson.
CVPR 2022.
[paper] [code]
QueryDet: Cascaded Sparse Query for Accelerating High-Resolution Small Object Detection.
Chenhongyi Yang, Zehao Huang, Naiyan Wang.
CVPR 2022.
[paper] [code]
DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR.
Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang.
ICLR 2022.
[paper] [code] [detrex code]
ViDT: An Efficient and Effective Fully Transformer-based Object Detector.
Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang.
ICLR 2022.
[paper] [code]
CF-DETR: Coarse-to-Fine Transformers for End-to-End Object Detection.
Xipeng Cao, Peng Yuan, Bailan Feng, Kun Niu.
AAAI 2022.
[paper]
FP-DETR: Detection Transformer Advanced by Fully Pre-training.
Wen Wang, Yang Cao, Jing Zhang, Dacheng Tao.
ICLR 2022.
[paper]
D^2ETR: Decoder-Only DETR with Computationally Efficient Cross-Scale Attention.
Junyu Lin, Xiaofeng Mao, Yuefeng Chen, Lei Xu, Yuan He, Hui Xue
arXiv 2022.
[paper] [code]
Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity.
Byungseok Roh, JaeWoong Shin, Wuhyun Shin, Saehoon Kim.
ICLR 2022.
[paper] [code]
Anchor DETR: Query Design for Transformer-Based Object Detection.
Yingming Wang, Xiangyu Zhang, Tong Yang, Jian Sun.
AAAI 2022.
[paper] [code]
Exploring Plain Vision Transformer Backbones for Object Detection.
Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.
arXiv 2022.
[paper] [code]
[YOLOS] You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection.
Yuxin Fang*, Bencheng Liao*, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
NeurIPS 2021.
[paper] [code]
Dynamic DETR: End-to-End Object Detection With Dynamic Attention.
Xiyang Dai, Yinpeng Chen, Jianwei Yang, Pengchuan Zhang, Lu Yuan, Lei Zhang.
ICCV 2021.
[paper]
PnP-DETR: Towards Efficient Visual Analysis with Transformers.
Tao Wang, Li Yuan, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
ICCV 2021.
[paper] [code]
WB-DETR: Transformer-Based Detector without Backbone.
Fanfan Liu, Haoran Wei, Wenzhe Zhao, Guozhen Li, Jingquan Peng, Zihao Li.
ICCV 2021.
[paper]
Conditional DETR for Fast Training Convergence.
Depu Meng*, Xiaokang Chen*, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
ICCV 2021.
[paper] [code] [detrex code]
Rethinking Transformer-based Set Prediction for Object Detection.
Zhiqing Sun, Shengcao Cao, Yiming Yang, Kris Kitani.
ICCV 2021.
[paper] [code]
Fast Convergence of DETR with Spatially Modulated Co-Attention.
Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, Hongsheng Li.
ICCV 2021.
[paper] [code]
Efficient DETR: Improving End-to-End Object Detector with Dense Prior.
Zhuyu Yao, Jiangbo Ai, Boxun Li, Chi Zhang.
arXiv 2021.
[paper]
UP-DETR: Unsupervised Pre-training for Object Detection with Transformers.
Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen.
CVPR 2021.
[paper] [code]
Deformable DETR: Deformable Transformers for End-to-End Object Detection.
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
ICLR 2021.
[paper] [code] [detrex code]
Open-Vocabulary DETR with Conditional Matching.
Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, Chen Change Loy.
ECCV 2022.
[paper] [code]
OW-DETR: Open-world Detection Transformer.
Akshita Gupta, Sanath Narayan, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah.
CVPR 2022.
[paper] [code]
Simple Open-Vocabulary Object Detection with Vision Transformers.
Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby.
arXiv 2022.
[paper]
X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks.
Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto.
arXiv 2022.
[paper]
MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding.
Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, Nicolas Carion.
ICCV 2021.
[paper] [code]
Class-agnostic Object Detection with Multi-modal Transformer.
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer and Ming-Hsuan Yang.
ECCV 2022.
[paper] [code]
[Object Centric OVD] Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection.
Hanoona Rasheed, Muhammad Maaz, Muhammad Uzair Khattak, Salman Khan, Fahad Shahbaz Khan.
arXiv:2207.03482.
[paper] [code]
BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers.
Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, Jifeng Dai.
ECCV 2022.
[paper] [code]
PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images.
Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Qi Gao, Tiancai Wang, Xiangyu Zhang, Jian Sun.
arXiv 2022.
[paper] [code]
PETR: Position Embedding Transformation for Multi-View 3D Object Detection.
Yingfei Liu, Tiancai Wang, Xiangyu Zhang, Jian Sun.
ECCV 2022.
[paper] [code]
BEVSegFormer: Bird’s Eye View Semantic Segmentation From Arbitrary Camera Rigs.
Lang Peng, Zhirong Chen, Zhangjie Fu, Pengpeng Liang and Erkang Cheng.
arXiv 2022.
[paper]
CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection.
Yanan Zhang, Jiaxin Chen, Di Huang.
CVPR 2022.
[paper]
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers.
Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, Chiew-Lan Tai.
CVPR 2022.
[paper] [code]
Omni-DETR: Omni-Supervised Object Detection with Transformers.
Pei Wang, Zhaowei Cai, Hao Yang, Gurumurthy Swaminathan, Nuno Vasconcelos, Bernt Schiele, Stefano Soatto.
CVPR 2022.
[paper]
MonoDETR: Depth-aware Transformer for Monocular 3D Object Detection.
Renrui Zhang, Han Qiu, Tai Wang, Xuanzhuo Xu, Ziyu Guo, Yu Qiao, Peng Gao, Hongsheng Li.
CVPR 2022.
[paper] [code]
MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer.
Kuan-Chih Huang, Tsung-Han Wu, Hung-Ting Su, Winston H. Hsu.
CVPR 2022.
[paper] [code]
[VoxSeT] Voxel Set Transformer: A Set-to-Set Approach to 3D Object Detection from Point Clouds.
Chenhang He, Ruihuang Li, Shuai Li, Lei Zhang.
CVPR 2022.
[paper] [code]
[SST] Embracing Single Stride 3D Object Detector with Sparse Transformer.
Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang.
CVPR 2022.
[paper] [code]
DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries.
Yue Wang, Vitor Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, Justin Solomon.
CoRL 2021.
[paper] [code]
[VOTR] Voxel Transformer for 3D object detection.
Jiageng Mao, Yujing Xue, Minzhe Niu, Haoyue Bai, Jiashi Feng, Xiaodan Liang, Hang Xu, Chunjing Xu.
ICCV 2021.
[paper] [code]
[SRDet] Suppress-and-Refine Framework for End-to-End 3D Object Detection.
Zili Liu, Guodong Xu, Honghui Yang, Minghao Chen, Kuoliang Wu, Zheng Yang, Haifeng Liu, Deng Cai.
arXiv 2021.
[paper] [code]
[3DETR] An End-to-End Transformer Model for 3D Object Detection.
Ishan Misra, Rohit Girdhar, Armand Joulin.
ICCV 2021.
[paper] [code]
[PointTransformer] Point Transformer.
Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun.
ICCV 2021.
[paper] [code]
[GroupFree3D] Group-Free 3D Object Detection via Transformers.
Ze Liu, Zheng Zhang, Yue Cao, Han Hu, Xin Tong.
ICCV 2021.
[paper] [code]
Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation.
Feng Li*, Hao Zhang*, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, Heung-Yeung Shum.
arXiv 2022.
[paper] [code]
[KMaX-DeepLab] k-means Mask Transformer.
Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen.
ECCV 2022.
[paper] [code]
[Mask2Former] Masked-attention Mask Transformer for Universal Image Segmentation.
Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
CVPR 2022.
[paper] [code]
[CMT-DeepLab] CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation.
Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen.
CVPR 2022.
[paper]
[Panoptic SegFormer] Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers.
Zhiqi Li, Wenhai Wang, Enze Xie, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo, Tong Lu.
arXiv 2021.
[paper] [code]
[MaskFormer] Per-Pixel Classification is Not All You Need for Semantic Segmentation.
Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
NeurIPS 2021.
[paper] [code]
[SETR] Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers.
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, Li Zhang.
CVPR 2021.
[paper] [code]
[MaX-DeepLab] MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers.
Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen.
CVPR 2021.
[paper] [code]
[SegFormer] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
NeurIPS 2021.
[paper] [code]
COCO Detection on Papers with Code.
COCO Instance Segmentation on Papers with Code.
COCO Panoptic Segmentation on Papers with Code.
Semantic Segmentation on Papers with Code.
3D Object Detection on Papers with Code.
We thank all the authors above for their great work!