Super Fast and Accurate 3D Object Detection based on LiDAR
Features
- Super fast and accurate 3D object detection based on LiDAR
- Fast training, fast inference
- An Anchor-free approach
- No Non-Max-Suppression
- Support distributed data parallel training
- Release pre-trained models
Technical details can be found here.
Demonstration
2. Getting Started
2.1. Requirements
pip install -U -r requirements.txt
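After the requirements are installed, a quick way to confirm that PyTorch can see your GPU (the scripts below select it via --gpu_idx) is a short check like the following. This is only a minimal sketch, not part of the repo:

```python
import torch

# Quick environment check: the training/inference scripts pick a GPU via --gpu_idx,
# so CUDA should be visible to PyTorch before going further.
print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
for idx in range(torch.cuda.device_count()):
    print(f"  gpu_idx={idx}: {torch.cuda.get_device_name(idx)}")
```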
2.2. Data Preparation
Download the 3D KITTI detection dataset from here.
The downloaded data includes:
- Velodyne point clouds (29 GB)
- Training labels of object data set (5 MB)
- Camera calibration matrices of object data set (16 MB)
- Left color images of object data set (12 GB) (For visualization purpose only)
Please make sure that the source code and dataset directories are organized as shown in the Folder structure section below.
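After extracting everything, a small script like the one below can confirm that every training sample has a matching point cloud, label, calibration file, and image. This is only a sanity-check sketch that assumes the directory layout from the Folder structure section; it is not part of the repo:

```python
import os

# Hypothetical sanity check for the KITTI layout shown in the "Folder structure" section.
training_dir = os.path.join("dataset", "kitti", "training")

counts = {}
for sub in ("velodyne", "label_2", "calib", "image_2"):
    path = os.path.join(training_dir, sub)
    counts[sub] = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f"{sub:10s}: {counts[sub]} files")

# All four folders should hold the same number of samples (7481 in the KITTI training split).
assert len(set(counts.values())) == 1, "Mismatched sample counts - check the extraction"
```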
2.3. How to run
2.3.1. Visualize the dataset
To visualize 3D point clouds with 3D boxes, let's execute:
cd src/data_process
python kitti_dataset.py
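If you want to inspect a scan directly, each KITTI Velodyne .bin file is a flat array of float32 values, four per point (x, y, z, reflectance), so it can be loaded with plain NumPy. A minimal sketch (the file name is just an example):

```python
import numpy as np

# A KITTI Velodyne scan is a flat float32 binary file: 4 values per point
# (x, y, z, reflectance), with coordinates in the LiDAR frame (metres).
scan_path = "dataset/kitti/training/velodyne/000000.bin"  # example sample
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

print("number of points:", points.shape[0])
print("x/y/z min:", points[:, :3].min(axis=0))
print("x/y/z max:", points[:, :3].max(axis=0))
```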
2.3.2. Inference
The pre-trained model was pushed to this repo.
python test.py --gpu_idx 0 --peak_thresh 0.2
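The --peak_thresh flag relates to the anchor-free, NMS-free design: detections are read off a keypoint heatmap, local maxima are kept with a 3x3 max-pooling trick (as in CenterNet), and peaks scoring below the threshold are discarded. The following is a sketch of that decoding idea, not the repo's exact code:

```python
import torch
import torch.nn.functional as F

def extract_peaks(heatmap, peak_thresh=0.2, max_objects=50):
    """CenterNet-style decoding sketch: keep local maxima of the class heatmap
    instead of running Non-Max-Suppression on boxes.

    heatmap: (num_classes, H, W) tensor of sigmoid scores.
    """
    # A cell is a peak if it equals the maximum of its 3x3 neighbourhood.
    pooled = F.max_pool2d(heatmap.unsqueeze(0), kernel_size=3, stride=1, padding=1).squeeze(0)
    peaks = heatmap * (pooled == heatmap).float()

    # Keep the top-scoring peaks above the confidence threshold.
    k = min(max_objects, peaks.numel())
    scores, flat_idx = peaks.flatten().topk(k)
    keep = scores >= peak_thresh
    scores, flat_idx = scores[keep], flat_idx[keep]

    num_classes, h, w = heatmap.shape
    classes = flat_idx // (h * w)
    ys = (flat_idx % (h * w)) // w
    xs = flat_idx % w
    return classes, ys, xs, scores
```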
2.3.3. Making a demonstration
python demo_2_sides.py --gpu_idx 0 --peak_thresh 0.2
The data for the demonstration will be downloaded automatically when you execute the above command.
2.3.4. Training
2.3.4.1. Single machine, single GPU
python train.py --gpu_idx 0
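The feature list also mentions distributed data parallel training; in PyTorch that normally means wrapping the model in DistributedDataParallel after initialising a process group. A generic sketch of that wrapping (the actual flags and entry point live in train.py and may differ):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_for_ddp(model, gpu_idx):
    """Generic PyTorch DDP wrapping, shown only to illustrate the
    'distributed data parallel training' feature; not the repo's own code.
    Assumes the process was launched with RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT
    set in the environment (e.g. by torchrun)."""
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(gpu_idx)
    model = model.cuda(gpu_idx)
    return DDP(model, device_ids=[gpu_idx])
```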
Tensorboard
- To track the training progress, go to the logs/ folder and start TensorBoard:
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
- Then go to http://localhost:6006/
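TensorBoard simply reads whatever event files the training script writes under logs/<saved_fn>/tensorboard/. With PyTorch this is typically done through torch.utils.tensorboard; a sketch of the general pattern (not the repo's own logger):

```python
from torch.utils.tensorboard import SummaryWriter

# The training script is assumed to write its event files under logs/<saved_fn>/tensorboard/,
# the same directory TensorBoard is pointed at above.
writer = SummaryWriter(log_dir="logs/fpn_resnet_18/tensorboard")  # <saved_fn> is an example

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder standing in for the real training loss
    writer.add_scalar("train/loss", loss, global_step=step)

writer.close()
```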
Contact
If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (email: nguyenmaudung93.kstn@gmail.com).
Thank you!
Citation
@misc{Super-Fast-Accurate-3D-Object-Detection-PyTorch,
author = {Nguyen Mau Dung},
title = {{Super-Fast-Accurate-3D-Object-Detection-PyTorch}},
howpublished = {\url{https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection}},
year = {2020}
}
References
[1] CenterNet: Objects as Points paper, PyTorch Implementation
[2] RTM3D: PyTorch Implementation
Folder structure
${ROOT}
├── checkpoints/
│   └── fpn_resnet_18/
│       └── fpn_resnet_18_epoch_300.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   ├── label_2/
│       │   └── velodyne/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   └── velodyne/
│       └── classes_names.txt
├── src/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── fpn_resnet.py
│   │   ├── resnet.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── demo_utils.py
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   ├── train_utils.py
│   │   └── visualization_utils.py
│   ├── demo_2_sides.py
│   ├── demo_front.py
│   ├── test.py
│   └── train.py
├── README.md
└── requirements.txt