This repo implements the paper *FUTR3D: A Unified Sensor Fusion Framework for 3D Detection* (see the project page).
We built our implementation upon MMDetection3D. The major part of the code is in the directory `plugin/futr3d`.
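The code is loaded as an MMDetection3D plugin. As a minimal sketch, assuming the configs follow the usual mmdet3d plugin convention (the `plugin`/`plugin_dir` fields used by DETR3D-style repos, not verbatim from this repo), a config enables the plugin along these lines:

```python
# Sketch of the standard mmdet3d plugin hook (assumption, not verbatim
# from this repo's configs): these two fields tell the training entry
# point to import custom modules from the plugin directory.
plugin = True
plugin_dir = 'plugin/futr3d/'
```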
Prerequisites:

- mmcv
- mmdetection
- mmdetection3d==0.17.3
- nuscenes-devkit
For the camera & radar setting, you need to generate a metadata (.pkl) file that includes the radar infos:
```bash
python3 tools/data_converter/nusc_radar.py
```
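To sanity-check the generated file, you can load it with plain `pickle`. The sketch below assumes the converter writes an mmdet3d-style infos dict; the output path is hypothetical, so check the converter's arguments for the actual filename:

```python
import pickle

# Hypothetical output path; substitute the filename the converter writes.
info_path = 'data/nuscenes/nuscenes_infos_train_radar.pkl'

with open(info_path, 'rb') as f:
    data = pickle.load(f)

# mmdet3d info files are usually a dict with an 'infos' list; fall back
# gracefully in case this converter uses a different layout.
infos = data.get('infos', data) if isinstance(data, dict) else data
print(f'loaded {len(infos)} samples')
print(sorted(infos[0].keys()) if infos else 'empty infos')
```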
For other settings, please follow the MMDetection3D data preparation instructions.
For example, to train FUTR3D with LiDAR-only input on 8 GPUs, run:
```bash
bash tools/dist_train.sh plugin/futr3d/configs/lidar_only/01voxel_q6_step_38e.py 8
```
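Before launching a long run, it can help to check that the config parses. A small sketch using mmcv's `Config` API (the mmcv version pinned by mmdetection3d==0.17.3 provides `Config.fromfile`):

```python
from mmcv import Config

# Load the LiDAR-only config referenced above and dump the resolved
# settings for a quick sanity check before training.
cfg = Config.fromfile('plugin/futr3d/configs/lidar_only/01voxel_q6_step_38e.py')
print(cfg.pretty_text)
```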
We will release our checkpoints in the next few days!
**LiDAR & camera:**

| Models | mAP | NDS |
|---|---|---|
| Res101 + 32-beam VoxelNet | 64.2 | 68.0 |
| Res101 + 4-beam VoxelNet | 54.9 | 61.5 |
| Res101 + 1-beam VoxelNet | 41.3 | 50.0 |
**Camera & radar:**

| Models | mAP | NDS |
|---|---|---|
| Res101 + Radar | 35.0 | 45.9 |
**LiDAR only:**

| Models | mAP | NDS |
|---|---|---|
| 32-beam VoxelNet | 59.3 | 65.5 |
| 4-beam VoxelNet | 42.1 | 54.8 |
| 1-beam VoxelNet | 16.4 | 37.9 |
**Camera only:**

| Models | mAP | NDS |
|---|---|---|
| Res101 | 34.6 | 42.5 |
Our implementation relies heavily on MMCV, MMDetection, MMDetection3D, and DETR3D.
- DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries
- MUTR3D: A Multi-camera Tracking Framework via 3D-to-2D Queries
- For more projects on autonomous driving, check out our Visual-Centric Autonomous Driving (VCAD) project page
```bibtex
@article{chen2022futr3d,
  title={FUTR3D: A Unified Sensor Fusion Framework for 3D Detection},
  author={Chen, Xuanyao and Zhang, Tianyuan and Wang, Yue and Wang, Yilun and Zhao, Hang},
  journal={arXiv preprint arXiv:2203.10642},
  year={2022}
}
```
Contact: Xuanyao Chen at xuanyaochen18@fudan.edu.cn or ixyaochen@gmail.com