
FUTR3D: A Unified Sensor Fusion Framework for 3D Detection

This repo implements the paper FUTR3D: A Unified Sensor Fusion Framework for 3D Detection (see the project page for more details).

We build our implementation upon MMDetection3D. The major part of the code is in the directory plugin/futr3d.

Environment

Prerequisites

  1. mmcv
  2. mmdetection
  3. mmdetection3d==0.17.3
  4. nuscenes-devkit
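A quick way to confirm the environment is set up as expected is to import the core packages and print their versions; this is only a sanity-check sketch (the repo pins mmdet3d==0.17.3, while compatible mmcv/mmdet versions are whatever that mmdet3d release requires).

```python
# Minimal environment sanity check: import the core dependencies and print
# their versions. Only mmdet3d==0.17.3 is pinned by this repo.
import mmcv
import mmdet
import mmdet3d

print('mmcv:   ', mmcv.__version__)
print('mmdet:  ', mmdet.__version__)
print('mmdet3d:', mmdet3d.__version__)  # expected: 0.17.3
```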

Data

For the camera + radar setting, you need to generate a metadata (.pkl) file that includes the radar information:

python3 tools/data_converter/nusc_radar.py
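A quick way to check the output is to load the generated .pkl and look at its keys. The file name below is an assumption (use whatever path nusc_radar.py actually writes), and the 'infos'/'metadata' layout is the usual mmdet3d info-file convention rather than something guaranteed by this script.

```python
# Sketch: inspect a generated radar info file. The path is an assumption;
# substitute the file actually produced by tools/data_converter/nusc_radar.py.
import pickle

with open('data/nuscenes/nuscenes_radar_infos_train.pkl', 'rb') as f:
    data = pickle.load(f)

# mmdet3d info files are usually a dict with 'infos' (per-sample records)
# and 'metadata'; this layout is assumed here, not guaranteed.
if isinstance(data, dict):
    print(data.keys())
    infos = data.get('infos', [])
    if infos:
        print(infos[0].keys())  # per-sample fields, e.g. radar sweep paths
else:
    print(type(data), len(data))
```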

For the other settings, please follow the MMDetection3D data preparation pipeline to process the data.

Train

For example, to train FUTR3D with LiDAR only on 8 GPUs, please use

bash tools/dist_train.sh plugin/futr3d/configs/lidar_only/01voxel_q6_step_38e.py 8
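Before launching a full run, it can be useful to load the config in Python and check which detector and dataset it defines. This is a sketch using the mmcv (<2.0) Config API that mmdet3d 0.17.x ships with, and it assumes you run it from the repo root so the config path resolves.

```python
# Sketch: inspect a FUTR3D config before training (mmcv<2.0 Config API).
from mmcv import Config

cfg = Config.fromfile('plugin/futr3d/configs/lidar_only/01voxel_q6_step_38e.py')
print(cfg.model.type)       # detector class registered under plugin/futr3d
print(cfg.data.train.type)  # training dataset type
# Note: actually building the model requires importing the modules in
# plugin/futr3d so that their classes are registered with mmdet3d.
```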

Results

We will release our checkpoints in the next few days!

LiDAR & Cam

| Models | mAP (%) | NDS (%) |
| --- | --- | --- |
| Res101 + 32-beam VoxelNet | 64.2 | 68.0 |
| Res101 + 4-beam VoxelNet | 54.9 | 61.5 |
| Res101 + 1-beam VoxelNet | 41.3 | 50.0 |

Cam & Radar

| Models | mAP (%) | NDS (%) |
| --- | --- | --- |
| Res101 + Radar | 35.0 | 45.9 |

LiDAR only

| Models | mAP (%) | NDS (%) |
| --- | --- | --- |
| 32-beam VoxelNet | 59.3 | 65.5 |
| 4-beam VoxelNet | 42.1 | 54.8 |
| 1-beam VoxelNet | 16.4 | 37.9 |

Cam only

| Models | mAP (%) | NDS (%) |
| --- | --- | --- |
| Res101 | 34.6 | 42.5 |

Acknowledgment

Our implementation relies heavily on MMCV, MMDetection, MMDetection3D, and DETR3D.

Related projects

  1. DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries
  2. MUTR3D: A Multi-camera Tracking Framework via 3D-to-2D Queries
  3. For more projects on autonomous driving, check out our Visual-Centric Autonomous Driving (VCAD) project page

Reference

@article{chen2022futr3d,
  title={FUTR3D: A Unified Sensor Fusion Framework for 3D Detection},
  author={Chen, Xuanyao and Zhang, Tianyuan and Wang, Yue and Wang, Yilun and Zhao, Hang},
  journal={arXiv preprint arXiv:2203.10642},
  year={2022}
}

Contact: Xuanyao Chen at xuanyaochen18@fudan.edu.cn or ixyaochen@gmail.com