# mmdetection3d

**Repository Path**: open-mmlab/mmdetection3d

## Basic Information

- **Project Name**: mmdetection3d
- **Description**: A general 3D perception codebase based on PyTorch and MMCV. It supports 3D object detection and 3D point cloud segmentation on multiple indoor and outdoor datasets, covers a wide range of single-modality and multi-modality algorithms, integrates seamlessly with the 2D detection modules in MMDetection, and provides a unified, standardized, and reproducible high-performance benchmark for various 3D perception tasks.
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 21
- **Forks**: 10
- **Created**: 2022-04-18
- **Last Updated**: 2024-10-28

## Categories & Tags

- **Categories**: Uncategorized
- **Tags**: None

## README
 
 
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection3d.readthedocs.io/en/latest/) [![badge](https://github.com/open-mmlab/mmdetection3d/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection3d/actions) [![codecov](https://codecov.io/gh/open-mmlab/mmdetection3d/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection3d) [![license](https://img.shields.io/github/license/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/blob/master/LICENSE)

**News**:

### 💎 Stable version

**v1.0.0rc6** was released on 2/12/2022.

### 🌟 Preview of 1.1.x version

A brand new version of **MMDetection3D v1.1.0rc0** was released on 1/9/2022:

- Unifies interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine) and [MMDet 3.x](https://github.com/open-mmlab/mmdetection/tree/3.x).
- A standard data protocol defines and unifies the common keys across different datasets.
- Faster training and testing speed with stronger baselines.

Find more new features in the [1.1.x branch](https://github.com/open-mmlab/mmdetection3d/tree/1.1). Issues and PRs are welcome!

The compatibility of models is broken due to the unification and simplification of coordinate systems. For now, most models are benchmarked with similar performance, though a few models are still being benchmarked. In this version, we update some of the model checkpoints after the refactor of coordinate systems. See more details in the [Changelog](docs/en/changelog.md).

In the [nuScenes 3D detection challenge](https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) of the 5th AI Driving Olympics in NeurIPS 2020, we obtained the best PKL award and the second runner-up with a multi-modality entry, as well as the best vision-only results. Code and models for the best vision-only method, [FCOS3D](https://arxiv.org/abs/2104.10956), have been released. Please stay tuned for [MoCa](https://arxiv.org/abs/2012.12741).

MMDeploy now supports the deployment of some MMDetection3D models.

Documentation: https://mmdetection3d.readthedocs.io/

## Introduction

English | [简体中文](README_zh-CN.md)

The master branch works with **PyTorch 1.3+**.

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project developed by [MMLab](http://mmlab.ie.cuhk.edu.hk/).

![demo image](resources/mmdet3d_outdoor_demo.gif)

### Major features

- **Support for multi-modality/single-modality detectors out of the box**

  It directly supports multi-modality/single-modality detectors, including MVXNet, VoteNet, PointPillars, etc.

- **Support for indoor/outdoor 3D detection out of the box**

  It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUN RGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the [nuImages dataset](https://github.com/open-mmlab/mmdetection3d/tree/master/configs/nuimages).

- **Natural integration with 2D detection**

  All of the **300+ models and methods from 40+ papers**, as well as the modules supported in [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), can be trained or used in this codebase.

- **High efficiency**

  It trains faster than other codebases. The main results are as below. Details can be found in [benchmark.md](./docs/en/benchmarks.md). We compare the number of samples trained per second (the higher, the better). Models that are not supported by other codebases are marked by `✗`.
| Methods             | MMDetection3D | [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) | [votenet](https://github.com/facebookresearch/votenet) | [Det3D](https://github.com/poodarchu/Det3D) |
| :-----------------: | :-----------: | :--------------------------------------------------: | :-----------------------------------------------------: | :-----------------------------------------: |
| VoteNet             | 358           | ✗                                                     | 77                                                        | ✗                                           |
| PointPillars-car    | 141           | ✗                                                     | ✗                                                         | 140                                         |
| PointPillars-3class | 107           | 44                                                    | ✗                                                         | ✗                                           |
| SECOND              | 40            | 30                                                    | ✗                                                         | ✗                                           |
| Part-A2             | 17            | 14                                                    | ✗                                                         | ✗                                           |

Like [MMDetection](https://github.com/open-mmlab/mmdetection) and [MMCV](https://github.com/open-mmlab/mmcv), MMDetection3D can also be used as a library to support different projects on top of it.
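For example, the following minimal sketch (patterned after the repository's `demo/pcd_demo.py`) builds a detector from a config, loads a trained checkpoint, and runs LiDAR-based inference on a single point cloud. The config, checkpoint, and point-cloud paths are illustrative placeholders; substitute files from the model zoo and your own data.

```python
# A minimal, illustrative sketch of using MMDetection3D as a library:
# build a detector from a config, load trained weights, and run inference
# on one LiDAR point cloud. Paths below are placeholders.
from mmdet3d.apis import inference_detector, init_model

config_file = 'configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py'  # example config
checkpoint_file = 'checkpoints/second_kitti-3d-car.pth'  # placeholder checkpoint name
pcd_file = 'demo/data/kitti/kitti_000008.bin'  # example KITTI point cloud

# Build the model from the config and load the trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference: `result` holds the predicted 3D boxes, scores and labels,
# while `data` holds the preprocessed input that was fed to the model.
result, data = inference_detector(model, pcd_file)
print(result)
```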
## License

This project is released under the [Apache 2.0 license](LICENSE).

## Changelog

**v1.0.0rc6** was released on 2/12/2022. Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

## Benchmark and model zoo

Results and models are available in the [model zoo](docs/en/model_zoo.md).

Supported components (backbones, heads, and features) and architectures are grouped by task: **3D Object Detection** (outdoor/indoor), **Monocular 3D Object Detection** (outdoor/indoor), **Multi-modal 3D Object Detection** (outdoor/indoor), and **3D Semantic Segmentation** (indoor). The backbone support of each architecture is summarized below.
|               | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet |
| :-----------: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: |
| SECOND        | ✗      | ✗          | ✓      | ✗     | ✗       | ✗   | ✗          |
| PointPillars  | ✗      | ✗          | ✓      | ✗     | ✓       | ✗   | ✗          |
| FreeAnchor    | ✗      | ✗          | ✗      | ✗     | ✓       | ✗   | ✗          |
| VoteNet       | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| H3DNet        | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| 3DSSD         | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| Part-A2       | ✗      | ✗          | ✓      | ✗     | ✗       | ✗   | ✗          |
| MVXNet        | ✓      | ✗          | ✓      | ✗     | ✗       | ✗   | ✗          |
| CenterPoint   | ✗      | ✗          | ✓      | ✗     | ✗       | ✗   | ✗          |
| SSN           | ✗      | ✗          | ✗      | ✗     | ✓       | ✗   | ✗          |
| ImVoteNet     | ✓      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| FCOS3D        | ✓      | ✗          | ✗      | ✗     | ✗       | ✗   | ✗          |
| PointNet++    | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| Group-Free-3D | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| ImVoxelNet    | ✓      | ✗          | ✗      | ✗     | ✗       | ✗   | ✗          |
| PAConv        | ✗      | ✓          | ✗      | ✗     | ✗       | ✗   | ✗          |
| DGCNN         | ✗      | ✗          | ✗      | ✓     | ✗       | ✗   | ✗          |
| SMOKE         | ✗      | ✗          | ✗      | ✗     | ✗       | ✓   | ✗          |
| PGD           | ✓      | ✗          | ✗      | ✗     | ✗       | ✗   | ✗          |
| MonoFlex      | ✗      | ✗          | ✗      | ✗     | ✗       | ✓   | ✗          |
| SA-SSD        | ✗      | ✗          | ✓      | ✗     | ✗       | ✗   | ✗          |
| FCAF3D        | ✗      | ✗          | ✗      | ✗     | ✗       | ✗   | ✓          |

**Note:** All of the **300+ models and methods from 40+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md) can be trained or used in this codebase.

## Installation

Please refer to [getting_started.md](docs/en/getting_started.md) for installation instructions.

## Get Started

Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMDetection3D. For beginners, we provide guidance for a quick run [with an existing dataset](docs/en/1_exist_data_model.md) and [with a customized dataset](docs/en/2_new_data_model.md). There are also tutorials on [learning the configuration system](docs/en/tutorials/config.md), [adding a new dataset](docs/en/tutorials/customize_dataset.md), [designing a data pipeline](docs/en/tutorials/data_pipeline.md), [customizing models](docs/en/tutorials/customize_models.md), [customizing runtime settings](docs/en/tutorials/customize_runtime.md), and the [Waymo dataset](docs/en/datasets/waymo_det.md).

Please refer to the [FAQ](docs/en/faq.md) for frequently asked questions. When updating the version of MMDetection3D, please also check the [compatibility doc](docs/en/compatibility.md) to be aware of the BC-breaking updates introduced in each version.
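As a small taste of the configuration system covered in the tutorials above, the sketch below loads a config with MMCV, overrides a few commonly edited fields, and dumps the result for use with `tools/train.py`. The config name and the exact field layout are illustrative assumptions and differ between models and schedules.

```python
# An illustrative sketch of the config system on the master branch, which
# uses MMCV's `Config`. The config name and field names are examples only --
# inspect the config you actually train with before editing it.
from mmcv import Config

cfg = Config.fromfile(
    'configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-car.py')

# Print the fully resolved config (all `_base_` files merged in).
print(cfg.pretty_text)

# Override a few commonly edited fields before training.
cfg.optimizer.lr = 0.001          # e.g. scale the learning rate for fewer GPUs
cfg.data.samples_per_gpu = 4      # batch size per GPU
cfg.runner.max_epochs = 80        # shorten or extend the schedule

# Save the modified config so it can be passed to tools/train.py.
cfg.dump('my_pointpillars_kitti_car_config.py')
```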
## Model deployment

MMDeploy now supports the deployment of some MMDetection3D models. Please refer to [model_deployment.md](docs/en/tutorials/model_deployment.md) for more details.

## Citation

If you find this project useful in your research, please consider citing:

```latex
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}
```

## Contributing

We appreciate all contributions to improve MMDetection3D. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgement

MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.

## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.