📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🤔Reporting Issues
MMSegmentation is an open source semantic segmentation toolbox based on PyTorch. It is a part of the OpenMMLab project.
The master branch works with PyTorch 1.5+.
Major features

- **Unified benchmark**

  We provide a unified benchmark toolbox for various semantic segmentation methods.

- **Modular design**

  We decompose the semantic segmentation framework into different components, so one can easily construct a customized semantic segmentation framework by combining different modules.

- **Support of multiple methods out of the box**

  The toolbox directly supports popular and contemporary semantic segmentation frameworks, e.g. PSPNet, DeepLabV3, PSANet, DeepLabV3+, etc.

- **High efficiency**

  The training speed is faster than or comparable to other codebases.
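The modular design described above follows OpenMMLab's registry-and-config pattern: components are registered by name and instantiated from config dicts. The sketch below is a simplified, self-contained stand-in for that pattern, not the real `mmcv` `Registry` implementation:

```python
# Illustrative sketch of the registry pattern behind MMSegmentation's modular
# design. This is a simplified stand-in, NOT the actual mmcv.Registry API.

class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register(self, cls):
        # Store the class under its own name so configs can refer to it.
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        # cfg is a dict like dict(type='PSPHead', channels=512); 'type'
        # selects the class and the remaining keys become constructor kwargs.
        cfg = dict(cfg)  # copy so the caller's dict is untouched
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)

HEADS = Registry('head')

@HEADS.register
class PSPHead:
    def __init__(self, channels):
        self.channels = channels

head = HEADS.build(dict(type='PSPHead', channels=512))
print(type(head).__name__, head.channels)  # PSPHead 512
```

Swapping one decode head for another then only requires changing the `type` field in a config, which is what makes mixing and matching modules cheap.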
v0.30.0 was released on 01/11/2023:
- Add a `projects/` folder and the first example project
- Support Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets
Please refer to changelog.md for details and release history.
A brand new version of MMSegmentation, v1.0.0rc3, was released on 12/31/2022:
- Unifies interfaces of all components based on MMEngine.
- Faster training and testing speed with complete support of mixed precision training.
- Refactored and more flexible architecture.
Find more new features in 1.x branch. Issues and PRs are welcome!
Please refer to get_started.md for installation and dataset_prepare.md for dataset preparation.
Please see train.md and inference.md for the basic usage of MMSegmentation. There are also tutorials for:
- customizing dataset
- designing data pipeline
- customizing modules
- customizing runtime
- training tricks
- useful tools
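As a quick orientation before reading train.md and inference.md: single-image inference on the 0.x branch is a few lines via `mmseg.apis`. The sketch below wraps it in a function with placeholder paths; check the installed version's API, since names may differ on the 1.x branch.

```python
# Sketch of single-image inference with MMSegmentation's 0.x Python API.
# The config/checkpoint paths are placeholders; mmsegmentation must be
# installed for the function body to run, so the import is kept inside it.

def segment_image(config_file, checkpoint_file, img_path, device='cuda:0'):
    # init_segmentor / inference_segmentor are the 0.x high-level helpers;
    # on the 1.x branch the equivalents live under mmseg.apis with new names.
    from mmseg.apis import init_segmentor, inference_segmentor

    model = init_segmentor(config_file, checkpoint_file, device=device)
    result = inference_segmentor(model, img_path)
    return result[0]  # H x W array of per-pixel class indices
```

A typical call would pass a config from `configs/` and a checkpoint downloaded from the model zoo.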
A Colab tutorial is also provided. You may preview the notebook here or run it directly on Colab.
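The tutorials above (customizing datasets, modules, and runtime) all revolve around MMSegmentation's config files, which are plain Python modules built from nested dicts. A trimmed, PSPNet-style sketch (abbreviated, not a complete runnable config) illustrates the shape:

```python
# Trimmed sketch of an MMSegmentation-style Python config.
# Real configs carry many more fields (data pipeline, optimizer, schedule).
norm_cfg = dict(type='SyncBN', requires_grad=True)

model = dict(
    type='EncoderDecoder',
    backbone=dict(type='ResNetV1c', depth=50, norm_cfg=norm_cfg),
    decode_head=dict(
        type='PSPHead',
        in_channels=2048,
        channels=512,
        num_classes=19,  # e.g. the 19 evaluation classes of Cityscapes
        norm_cfg=norm_cfg,
        loss_decode=dict(type='CrossEntropyLoss', loss_weight=1.0)),
)
```

Customization usually means inheriting a base config and overriding only the dict keys that change, e.g. `num_classes` for a new dataset.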
Results and models are available in the model zoo.
Supported backbones:
- ResNet (CVPR'2016)
- ResNeXt (CVPR'2017)
- HRNet (CVPR'2019)
- ResNeSt (ArXiv'2020)
- MobileNetV2 (CVPR'2018)
- MobileNetV3 (ICCV'2019)
- Vision Transformer (ICLR'2021)
- Swin Transformer (ICCV'2021)
- Twins (NeurIPS'2021)
- BEiT (ICLR'2022)
- ConvNeXt (CVPR'2022)
- MAE (CVPR'2022)
- PoolFormer (CVPR'2022)
Supported methods:
- FCN (CVPR'2015/TPAMI'2017)
- ERFNet (T-ITS'2017)
- UNet (MICCAI'2016/Nat. Methods'2019)
- PSPNet (CVPR'2017)
- DeepLabV3 (ArXiv'2017)
- BiSeNetV1 (ECCV'2018)
- PSANet (ECCV'2018)
- DeepLabV3+ (CVPR'2018)
- UPerNet (ECCV'2018)
- ICNet (ECCV'2018)
- NonLocal Net (CVPR'2018)
- EncNet (CVPR'2018)
- Semantic FPN (CVPR'2019)
- DANet (CVPR'2019)
- APCNet (CVPR'2019)
- EMANet (ICCV'2019)
- CCNet (ICCV'2019)
- DMNet (ICCV'2019)
- ANN (ICCV'2019)
- GCNet (ICCVW'2019/TPAMI'2020)
- FastFCN (ArXiv'2019)
- Fast-SCNN (ArXiv'2019)
- ISANet (ArXiv'2019/IJCV'2021)
- OCRNet (ECCV'2020)
- DNLNet (ECCV'2020)
- PointRend (CVPR'2020)
- CGNet (TIP'2020)
- BiSeNetV2 (IJCV'2021)
- STDC (CVPR'2021)
- SETR (CVPR'2021)
- DPT (ArXiv'2021)
- Segmenter (ICCV'2021)
- SegFormer (NeurIPS'2021)
- K-Net (NeurIPS'2021)
Supported datasets:
- Cityscapes
- PASCAL VOC
- ADE20K
- Pascal Context
- COCO-Stuff 10k
- COCO-Stuff 164k
- CHASE_DB1
- DRIVE
- HRF
- STARE
- Dark Zurich
- Nighttime Driving
- LoveDA
- Potsdam
- Vaihingen
- iSAID
- High quality synthetic face occlusion
Please refer to FAQ for frequently asked questions.
We appreciate all contributions to improve MMSegmentation. Please refer to CONTRIBUTING.md for the contributing guideline.
MMSegmentation is an open source project that welcomes any contribution and feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and develop new semantic segmentation methods.
If you find this project useful in your research, please consider citing:
```bibtex
@misc{mmseg2020,
  title={{MMSegmentation}: OpenMMLab Semantic Segmentation Toolbox and Benchmark},
  author={MMSegmentation Contributors},
  howpublished={\url{https://github.com/open-mmlab/mmsegmentation}},
  year={2020}
}
```
MMSegmentation is released under the Apache 2.0 license, while some specific features in this library use other licenses. Please refer to LICENSES.md and check carefully if you are using our code for commercial purposes.
- MMCV: OpenMMLab foundational library for computer vision.
- MIM: MIM installs OpenMMLab packages.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMGeneration: OpenMMLab image and video generative models toolbox.
- MMDeploy: OpenMMLab Model Deployment Framework.