Awesome-Real-time-Semantic-Segmentation

Real-time Semantic Segmentation

Real-time Semantic Segmentation Methods

Single-branch Network

| Method | Title | Paper | Code | Venue | Architecture | Target Platform | Year | Datasets |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ENet | ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation | Paper | Code | arXiv | CNN | Desk/Mobile GPU | 2016 | Cityscapes, CamVid, SUN |
| DABNet | DABNet: Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation | Paper | Code | BMVC | CNN | Desk GPU | 2019 | Cityscapes, CamVid |
| SegFormer | SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers | Paper | Code | NeurIPS | Transformer | Desk GPU | 2021 | Cityscapes, ADE20K, PascalContext, PascalVOC, COCO-Stuff, iSAID |
| SegNeXt | SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation | Paper | Code | NeurIPS | CNN | Desk GPU | 2022 | Cityscapes, ADE20K, PascalContext, PascalVOC, COCO-Stuff, iSAID |
| AFFormer | Head-Free Lightweight Semantic Segmentation with Linear Transformer | Paper | Code | AAAI | Transformer | Desk GPU | 2023 | Cityscapes, ADE20K, COCO-Stuff |

Two-branch Network

| Method | Title | Paper | Code | Venue | Architecture | Target Platform | Year | Datasets |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BiSeNet | BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation | Paper | Code | ECCV | CNN | Desk GPU | 2018 | Cityscapes, CamVid, COCO-Stuff |
| Fast-SCNN | Fast-SCNN: Fast Semantic Segmentation Network | Paper | Code | BMVC | CNN | Desk GPU | 2019 | Cityscapes |
| BiSeNetV2 | BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation | Paper | Code | IJCV | CNN | Desk GPU | 2021 | Cityscapes, CamVid, COCO-Stuff |
| STDC | Rethinking BiSeNet For Real-time Semantic Segmentation | Paper | Code | CVPR | CNN | Desk GPU | 2021 | Cityscapes, CamVid |
| DDRNet | Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Traffic Scenes | Paper | Code | T-ITS | CNN | Desk GPU | 2022 | Cityscapes, CamVid, COCO-Stuff |
| RTFormer | RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer | Paper | Code | NeurIPS | Hybrid | Desk GPU | 2022 | Cityscapes, CamVid, ADE20K, COCO-Stuff |
| SeaFormer | SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation | Paper | Code | ICLR | Hybrid | Mobile CPU | 2023 | Cityscapes, ADE20K, PascalContext, COCO-Stuff |
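The two-branch methods above generally share one pattern: a shallow, high-resolution branch preserves spatial detail while a deeper, heavily downsampled branch captures semantic context, and the two are fused before a lightweight segmentation head. The sketch below illustrates that pattern in PyTorch; the module layout, channel widths, and the simple additive fusion are placeholders for illustration only, not the design of any specific paper in the table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch, stride=1):
    """3x3 conv + BN + ReLU, the basic block used in both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoBranchSegNet(nn.Module):
    """Illustrative two-branch (bilateral) segmentation network."""

    def __init__(self, num_classes=19):
        super().__init__()
        # Detail branch: few layers, small total stride (1/8 here), keeps spatial detail.
        self.detail = nn.Sequential(
            conv_bn_relu(3, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
        )
        # Context branch: more layers, large total stride (1/32 here), captures semantics.
        self.context = nn.Sequential(
            conv_bn_relu(3, 32, stride=2),
            conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, 128, stride=2),
        )
        # Lightweight segmentation head applied after fusion.
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        detail = self.detail(x)        # high resolution, spatial detail
        context = self.context(x)      # low resolution, semantic context
        context = F.interpolate(context, size=detail.shape[2:],
                                mode="bilinear", align_corners=False)
        fused = detail + context       # placeholder fusion; papers differ here
        logits = self.head(fused)
        # Upsample logits back to the input resolution for per-pixel prediction.
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)

if __name__ == "__main__":
    model = TwoBranchSegNet(num_classes=19).eval()
    with torch.no_grad():
        out = model(torch.randn(1, 3, 512, 1024))
    print(out.shape)  # torch.Size([1, 19, 512, 1024])
```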

Multi-branch Network

| Method | Title | Paper | Code | Venue | Architecture | Target Platform | Year | Datasets |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ICNet | ICNet for Real-Time Semantic Segmentation on High-Resolution Images | Paper | Code | ECCV | CNN | Desk GPU | 2018 | Cityscapes, CamVid, COCO-Stuff |
| ESPNet | ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation | Paper | Code | ECCV | CNN | Desk/Mobile GPU | 2018 | Cityscapes, PascalVOC, Mapillary |
| DFANet | DFANet: Deep Feature Aggregation for Real-Time Semantic Segmentation | Paper |  | CVPR | CNN | Desk GPU | 2019 | Cityscapes, CamVid |
| PIDNet | PIDNet: A Real-time Semantic Segmentation Network Inspired by PID Controllers | Paper | Code | CVPR | CNN | Desk GPU | 2023 | Cityscapes, CamVid, PascalContext |

U-shape Network

| Method | Title | Paper | Code | Venue | Architecture | Target Platform | Year | Datasets |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SwiftNet | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Paper | Code | CVPR | CNN | Desk/Mobile GPU | 2019 | Cityscapes, CamVid |
| ShelfNet | ShelfNet for Fast Semantic Segmentation | Paper | Code | ICCV (Workshop) | CNN | Desk GPU | 2019 | Cityscapes, PascalContext, PascalVOC |
| SFNet | Semantic Flow for Fast and Accurate Scene Parsing | Paper | Code | ECCV | CNN | Desk GPU | 2020 | Cityscapes, CamVid, ADE20K, PascalContext |
| HyperSeg | HyperSeg: Patch-wise Hypernetwork for Real-time Semantic Segmentation | Paper | Code | CVPR | CNN | Desk GPU | 2021 | Cityscapes, CamVid, PascalVOC |
| TopFormer | TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation | Paper | Code | CVPR | Hybrid | Mobile CPU | 2022 | Cityscapes, ADE20K, PascalContext, COCO-Stuff |

NAS Network

| Method | Title | Paper | Code | Venue | Architecture | Target Platform | Year | Datasets |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DF-Seg | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Paper |  | CVPR | CNN | Desk/Mobile GPU | 2019 | Cityscapes |
| FasterSeg | FasterSeg: Searching for Faster Real-Time Semantic Segmentation | Paper | Code | ICLR | CNN | Desk GPU | 2020 | Cityscapes, CamVid, BDD |
| RT-Seg | Towards Real-Time Segmentation on the Edge | Paper |  | AAAI | CNN | Mobile GPU | 2023 | Cityscapes, ADE20K, PascalVOC |
| Pruning Parameterization | Pruning Parameterization with Bi-level Optimization for Efficient Semantic Segmentation on the Edge | Paper |  | CVPR | CNN | Mobile GPU | 2023 | Cityscapes, ADE20K, PascalVOC |

Dataset

| Dataset | Link | Year | Classes | Resolution | Images | Type |
| --- | --- | --- | --- | --- | --- | --- |
| Cityscapes | Link | 2016 | 19 | 2048x1024 | 5000 | Autonomous driving |
| CamVid | Link | 2009 | 11 | 960x720 | 701 | Autonomous driving |
| ADE20K | Link | 2017 | 150 | Variable | 22210 | General scenes |
| COCO-Stuff-10K | Link | 2018 | 171 | Variable | 10000 | General scenes |
| PASCAL VOC 2012 | Link | 2012 | 20 | Variable | 13487 | General scenes |
| PascalContext | Link | 2014 | 59 | Variable | 10103 | General scenes |

Latency Measurement Code

Link
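For reference, the sketch below shows a typical GPU latency measurement protocol: warm-up iterations, `torch.cuda.synchronize()` around the timed loop, and FPS computed from the average forward time. It is a generic example; the input resolution, iteration counts, and precision are assumptions and may differ from the protocol in the linked code.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, input_size=(1, 3, 1024, 2048),
                    warmup_iters=50, timing_iters=200, device="cuda"):
    """Average forward-pass latency (ms) and FPS for a segmentation model."""
    model = model.to(device).eval()
    dummy = torch.randn(*input_size, device=device)

    # Warm-up: lets cuDNN select algorithms and stabilizes GPU clocks before timing.
    for _ in range(warmup_iters):
        model(dummy)
    torch.cuda.synchronize()

    # Timed loop: synchronize so asynchronous CUDA kernels are fully counted.
    start = time.perf_counter()
    for _ in range(timing_iters):
        model(dummy)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    latency_ms = elapsed / timing_iters * 1000.0
    fps = timing_iters / elapsed
    return latency_ms, fps

if __name__ == "__main__":
    # Example with a torchvision model as a stand-in for a real-time method.
    from torchvision.models.segmentation import lraspp_mobilenet_v3_large
    latency_ms, fps = measure_latency(lraspp_mobilenet_v3_large(weights=None))
    print(f"latency: {latency_ms:.2f} ms, fps: {fps:.1f}")
```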

Citation

If our summary is helpful to you, please cite the following papers:

@article{Gao2024Survey,
  title={Deep learning-based real-time semantic segmentation: a survey},
  author={Gao, Changxin and Xu, Zhengze and Wu, Dongyue and Yu, Changqian and Sang, Nong},  
  journal={Journal of Image and Graphics},
  volume={29},
  number={5},
  pages={1119--1145},
  year={2024}
}

@article{高常鑫2024深度学习实时语义分割综述,
  title={深度学习实时语义分割综述},
  author={高常鑫 and 徐正泽 and 吴东岳 and 余昌黔 and 桑农},
  journal={中国图象图形学报},
  volume={29},
  number={5},
  pages={1119--1145},
  year={2024}
}

@inproceedings{xu2024sctnet,
  title={SCTNet: Single-Branch CNN with Transformer Semantic Information for Real-Time Segmentation},
  author={Xu, Zhengze and Wu, Dongyue and Yu, Changqian and Chu, Xiangxiang and Sang, Nong and Gao, Changxin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={6},
  pages={6378--6386},
  year={2024}
}

@inproceedings{yu2018bisenet,
  title={Bisenet: Bilateral segmentation network for real-time semantic segmentation},
  author={Yu, Changqian and Wang, Jingbo and Peng, Chao and Gao, Changxin and Yu, Gang and Sang, Nong},
  booktitle={Proceedings of the European conference on computer vision (ECCV)},
  pages={325--341},
  year={2018}
}

@article{yu2021bisenet,
  title={Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation},
  author={Yu, Changqian and Gao, Changxin and Wang, Jingbo and Yu, Gang and Shen, Chunhua and Sang, Nong},
  journal={International Journal of Computer Vision},
  volume={129},
  pages={3051--3068},
  year={2021},
  publisher={Springer}
}

**If you have any questions, please contact: zhengzexu@hust.edu.cn**