MMDetection3D

OpenMMLab's next-generation platform for general 3D object detection.


 

📘 Documentation | 🛠️ Installation | 👀 Model Zoo | 🆕 Update News | 🚀 Ongoing Projects | 🤔 Reporting Issues

English | 简体中文

Introduction

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project.

The main branch works with PyTorch 1.8+.

demo image

Major features
  • Support multi-modality/single-modality detectors out of the box

    It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.

  • Support indoor/outdoor 3D detection out of the box

    It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For nuScenes, we also support the nuImages dataset.

  • Natural integration with 2D detection

    All of the 300+ models and modules from 40+ papers supported in MMDetection can be trained or used in this codebase.

  • High efficiency

    It trains faster than other codebases. The main results are below; details can be found in benchmark.md. We compare the number of samples trained per second (higher is better). Models not supported by other codebases are marked with ✗.

    | Methods             | MMDetection3D | OpenPCDet | votenet | Det3D |
    | ------------------- | ------------- | --------- | ------- | ----- |
    | VoteNet             | 358           | ✗         | 77      | ✗     |
    | PointPillars-car    | 141           | ✗         | ✗       | 140   |
    | PointPillars-3class | 107           | 44        | ✗       | ✗     |
    | SECOND              | 40            | 30        | ✗       | ✗     |
    | Part-A2             | 17            | 14        | ✗       | ✗     |
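
    To make the "higher is better" comparison concrete, the throughput numbers can be read as relative speedups. The following self-contained Python sketch (values copied from the table above; the `speedup` helper is our own illustration, not part of MMDetection3D) computes MMDetection3D's gain over the fastest competing codebase per method:

    ```python
    from typing import Optional

    # Training throughput (samples/second) copied from the benchmark table above.
    # None marks a model the codebase does not support (the "✗" entries).
    throughput = {
        "VoteNet":             {"MMDetection3D": 358, "OpenPCDet": None, "votenet": 77,   "Det3D": None},
        "PointPillars-car":    {"MMDetection3D": 141, "OpenPCDet": None, "votenet": None, "Det3D": 140},
        "PointPillars-3class": {"MMDetection3D": 107, "OpenPCDet": 44,   "votenet": None, "Det3D": None},
        "SECOND":              {"MMDetection3D": 40,  "OpenPCDet": 30,   "votenet": None, "Det3D": None},
        "Part-A2":             {"MMDetection3D": 17,  "OpenPCDet": 14,   "votenet": None, "Det3D": None},
    }

    def speedup(method: str) -> Optional[float]:
        """MMDetection3D's speedup over the fastest other codebase, or None."""
        rows = throughput[method]
        others = [v for name, v in rows.items()
                  if name != "MMDetection3D" and v is not None]
        if not others:
            return None
        return rows["MMDetection3D"] / max(others)

    for method in throughput:
        s = speedup(method)
        label = "n/a" if s is None else f"{s:.2f}x"
        print(f"{method}: {label}")
    # VoteNet: 4.65x, PointPillars-car: 1.01x, PointPillars-3class: 2.43x,
    # SECOND: 1.33x, Part-A2: 1.21x
    ```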

Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.

What's New

Highlight

We have renamed the branch 1.1 to main and switched the default branch from master to main. We encourage users to migrate to the latest version, though it comes with some cost. Please refer to Migration Guide for more details.

We have constructed a comprehensive LiDAR semantic segmentation benchmark on SemanticKITTI, including the Cylinder3D, MinkUNet, and SPVCNN methods. Notably, the improved MinkUNetv2 achieves 70.3 mIoU on the validation set of SemanticKITTI. We have also supported the training of BEVFusion and an occupancy prediction method, TPVFormer, in our projects. More new features for 3D perception are on the way. Please stay tuned!

v1.2.0 was released on 4/7/2023:

  • Support New Config Type in mmdet3d/config
  • Support the inference of DSVT in projects
  • Support downloading datasets from OpenDataLab using mim

v1.1.1 was released on 30/5/2023:

  • Support TPVFormer in projects
  • Support the training of BEVFusion in projects
  • Support the LiDAR-based 3D semantic segmentation benchmark

Installation

Please refer to Installation for installation instructions.
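
As a rough sketch only (the version pins below are assumptions; always defer to the linked Installation guide for CUDA/PyTorch compatibility), a typical setup on top of an existing PyTorch environment uses MIM:

```shell
# Install the OpenMMLab package manager, then the MMDetection3D stack.
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0'
mim install 'mmdet3d>=1.1.0'
```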

Getting Started

For detailed user guides and advanced guides, please refer to our documentation:

User Guides
Advanced Guides

Overview of Benchmark and Model Zoo

Results and models are available in the model zoo.

Components

The model zoo covers backbones (ResNet, VoVNet, Swin-T, PointNet++, SECOND, DGCNN, RegNetX, DLA, MinkResNet, Cylinder3D, MinkUNet), heads, and features.

Architectures

Supported architectures span LiDAR-based, camera-based, and multi-modal 3D object detection as well as 3D semantic segmentation, with both outdoor and indoor methods in each category. The table below lists which backbones each method can use (✓ = supported, ✗ = not supported):

| Method        | ResNet | VoVNet | Swin-T | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet |
| ------------- | :----: | :----: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: | :------: |
| SECOND        | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointPillars  | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| FreeAnchor    | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| VoteNet       | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3DNet        | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 3DSSD         | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Part-A2       | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MVXNet        | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| CenterPoint   | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| SSN           | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| ImVoteNet     | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCOS3D        | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PointNet++    | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Group-Free-3D | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| ImVoxelNet    | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PAConv        | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DGCNN         | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| SMOKE         | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| PGD           | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MonoFlex      | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| SA-SSD        | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| FCAF3D        | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| PV-RCNN       | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Cylinder3D    | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
| MinkUNet      | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| SPVCNN        | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| BEVFusion     | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| CenterFormer  | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| TR3D          | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| DETR3D        | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| PETR          | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| TPVFormer     | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |

Note: All of the 500+ models and methods from 90+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.

FAQ

Please refer to FAQ for frequently asked questions.

Contributing

We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guidelines.

Acknowledgement

MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.

Citation

If you find this project useful in your research, please consider citing:

```bibtex
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}
```

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMEval: A unified evaluation library for multiple machine learning libraries.
  • MIM: MIM installs OpenMMLab packages.
  • MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab few-shot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMagic: OpenMMLab Advanced, Generative and Intelligent Creation toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.