- Super fast and accurate 3D object detection based on LiDAR
- Fast training, fast inference
- An Anchor-free approach
- No Non-Max-Suppression
- Support distributed data parallel training
- Release pre-trained models
The technical details are described here
Update 2020.09.06: Added a ROS node. This great work was done by @AhmedARadwan. The instructions for using the ROS code can be found here
cd ~/catkin_ws/src
git clone https://github.com/hyungikim4/SFA3D.git
cd SFA3D/
pip install .
cd ~/catkin_ws && catkin_make
Download the 3D KITTI detection dataset from here.
The downloaded data includes:
- Velodyne point clouds (29 GB)
- Training labels of object data set (5 MB)
- Camera calibration matrices of object data set (16 MB)
- Left color images of object data set (12 GB) (for visualization purposes only)
Please make sure that you construct the source code and dataset directory structure as shown below.
To visualize 3D point clouds with 3D boxes, let's execute:
cd sfa/data_process/
python kitti_dataset.py
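For reference, each KITTI Velodyne scan is a flat binary file of float32 values in (x, y, z, reflectance) order, so a single frame can be loaded directly with NumPy (the path below is illustrative):

```python
import numpy as np

# Illustrative path; point this at one of the downloaded scans.
scan = np.fromfile("dataset/kitti/training/velodyne/000000.bin", dtype=np.float32)
points = scan.reshape(-1, 4)   # columns: x, y, z, reflectance
print(points.shape)
```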
The pre-trained model was pushed to this repo.
python test.py --gpu_idx 0 --peak_thresh 0.2
python demo_2_sides.py --gpu_idx 0 --peak_thresh 0.2
The data for the demonstration will be automatically downloaded by executing the above command.
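The `--peak_thresh` flag used by `test.py` and `demo_2_sides.py` controls which heatmap peaks are kept as detections. As a hedged sketch of how CenterNet-style, anchor-free heads typically decode without NMS (illustrative only, not the exact code in `sfa/`):

```python
import torch
import torch.nn.functional as F

def extract_peaks(heatmap, peak_thresh=0.2):
    """Keep local maxima of a [B, C, H, W] center heatmap above peak_thresh.

    Illustrative sketch only: a 3x3 max-pool replaces classical NMS, which is
    why no Non-Max-Suppression step is needed in this family of detectors.
    """
    scores = torch.sigmoid(heatmap)                                # scores in [0, 1]
    local_max = F.max_pool2d(scores, kernel_size=3, stride=1, padding=1)
    peaks = scores * (local_max == scores).float()                 # keep only local maxima
    return peaks * (peaks >= peak_thresh).float()                  # apply --peak_thresh
```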
python train.py --gpu_idx 0
- Single machine (node), multiple GPUs
python train.py --multiprocessing-distributed --world-size 1 --rank 0 --batch_size 64 --num_workers 8
- Two machines (two nodes), multiple GPUs
- First machine
python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 0 --batch_size 64 --num_workers 8
- Second machine
python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 1 --batch_size 64 --num_workers 8
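For reference, the `--dist-url`, `--world-size`, and `--rank` flags follow PyTorch's standard distributed-training convention: the effective world size is nodes × GPUs per node, and every node must pass the same `init_method` (the address of the rank-0 node). A minimal, hedged sketch of how such flags are typically wired into `torch.distributed` (simplified; the actual logic lives in `train.py`):

```python
import torch
import torch.distributed as dist

def init_distributed(dist_url, num_nodes, node_rank, gpus_per_node, local_gpu):
    """Illustrative setup for one process per GPU (not SFA3D's exact code)."""
    world_size = num_nodes * gpus_per_node
    global_rank = node_rank * gpus_per_node + local_gpu
    # Every process on every node must use the same dist_url: the rank-0 node's address.
    dist.init_process_group(backend="nccl", init_method=dist_url,
                            world_size=world_size, rank=global_rank)
    torch.cuda.set_device(local_gpu)
    # The model is then wrapped with torch.nn.parallel.DistributedDataParallel(...).
```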
Since it is difficult to create a large amount of custom data, it is recommended to first train on the KITTI dataset with the desired configuration and then fine-tune on the custom dataset.
cd ros/src/super_fast_object_detection/src/
Modify veloster_config_2sides.py to fit your dataset (see the sketch after the table below).
| Parameter | Description |
|---|---|
| `boundary` | Front LiDAR x, y, z range. The z range depends on the mounting height of the LiDAR sensor. |
| `boundary_back` | Rear LiDAR x, y, z range. |
| `BEV_WIDTH` | Width of the BEV RGB map. |
| `BEV_HEIGHT` | Height of the BEV RGB map. |
| `Tr_velo_to_cam` | 3x4 transform matrix from the LiDAR coordinate frame to image pixel coordinates. |
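For orientation, a config such as `veloster_config_2sides.py` typically defines these values along the lines of the sketch below. All numbers are placeholders for illustration only; use the ranges and calibration of your own sensor setup:

```python
import numpy as np

# Placeholder values for illustration only -- adjust to your own LiDAR setup.
boundary = {            # front LiDAR detection range (meters)
    "minX": 0.0,   "maxX": 50.0,
    "minY": -25.0, "maxY": 25.0,
    "minZ": -2.73, "maxZ": 1.27,   # z range depends on sensor mounting height
}
boundary_back = {       # rear LiDAR detection range (meters)
    "minX": -50.0, "maxX": 0.0,
    "minY": -25.0, "maxY": 25.0,
    "minZ": -2.73, "maxZ": 1.27,
}
BEV_WIDTH = 608         # width of the BEV RGB map in pixels
BEV_HEIGHT = 608        # height of the BEV RGB map in pixels

# 3x4 LiDAR -> image projection matrix (placeholder values, not a real calibration)
Tr_velo_to_cam = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
```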
Save the raw LiDAR point cloud, front image, and BEV RGB map at 1 Hz from the rosbag in which the LiDAR data and front image data are logged.
For easier labeling, a BEV binary map of the ground-filtered point cloud is saved alongside the BEV RGB map (see the sketch after the directory layout below).
roslaunch points_preprocessor_usi groundfilter.launch
python make_training_data_for_labeling_2sides_without_map.py
└── save_path
├── bev <-- BEV RGB map image
├── front_image <-- front RGB image
├── lidar <-- Lidar raw data (.npy)
└── label <-- label data (**TO DO**)
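As a rough sketch of what the BEV maps above contain (the actual generation is done by `make_training_data_for_labeling_2sides_without_map.py`; the function and variable names below are assumptions for illustration), a point cloud is discretized into a top-down occupancy grid:

```python
import numpy as np

def pointcloud_to_bev_binary(points, boundary, bev_height=608, bev_width=608):
    """Project an (N, 3+) point cloud to a top-down binary occupancy map.

    Illustrative sketch: each point inside `boundary` marks one BEV pixel.
    """
    x, y = points[:, 0], points[:, 1]
    mask = ((x >= boundary["minX"]) & (x < boundary["maxX"]) &
            (y >= boundary["minY"]) & (y < boundary["maxY"]))
    x, y = x[mask], y[mask]

    # meters per pixel along each axis
    res_x = (boundary["maxX"] - boundary["minX"]) / bev_height
    res_y = (boundary["maxY"] - boundary["minY"]) / bev_width

    rows = np.clip(((x - boundary["minX"]) / res_x).astype(np.int32), 0, bev_height - 1)
    cols = np.clip(((y - boundary["minY"]) / res_y).astype(np.int32), 0, bev_width - 1)

    bev = np.zeros((bev_height, bev_width), dtype=np.uint8)
    bev[rows, cols] = 255
    return bev
```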
Auto-labeling using the trained model
python3 inference_label_2sides.py
| Parameter | Type | Description |
|---|---|---|
| `self.conf_thres` | double | Confidence threshold. Only objects with confidence higher than `self.conf_thres` are written to the label text file. |
| `configs.pretrained_path` | string | Pretrained model path. |
| `data_root` | string | Root path of your dataset. |
| `have_train_txt` | bool | If true, run inference only on the data listed in `train.txt`. |
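Conceptually, the auto-labeling step keeps detections above `self.conf_thres` and writes them out as KITTI-format label lines, one object per line. A hedged sketch of that step, with hypothetical detection fields:

```python
def write_kitti_labels(detections, label_path, conf_thres=0.5):
    """Write detections above conf_thres as KITTI-style label lines (sketch).

    `detections` is assumed to be a list of dicts with hypothetical keys:
    class_name, h, w, l, x, y, z, yaw, score.
    """
    with open(label_path, "w") as f:
        for det in detections:
            if det["score"] < conf_thres:
                continue
            # KITTI label fields: type, truncated, occluded, alpha, bbox (4 values),
            # dimensions (h, w, l), location (x, y, z), rotation_y
            f.write("{} 0 0 0 0 0 0 0 {:.2f} {:.2f} {:.2f} {:.2f} {:.2f} {:.2f} {:.2f}\n".format(
                det["class_name"], det["h"], det["w"], det["l"],
                det["x"], det["y"], det["z"], det["yaw"]))
```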
Follow this guideline: https://github.com/zexihan/labelImg-kitti
cd ~/catkin_ws/src/SFA3D/sfa/data_process
python write_train_txt.py
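For context, `write_train_txt.py` produces the `train.txt` index used by the steps above. A minimal sketch of what such a script does (the paths are assumptions), listing every labeled frame ID:

```python
import os

def write_train_txt(label_dir, out_path):
    """List every frame that has a label file, one ID per line (illustrative)."""
    ids = sorted(os.path.splitext(f)[0] for f in os.listdir(label_dir) if f.endswith(".txt"))
    with open(out_path, "w") as f:
        f.write("\n".join(ids) + "\n")

# Example usage (hypothetical paths):
# write_train_txt("save_path/label", "save_path/ImageSets/train.txt")
```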
Visualize 3D point clouds with 3D boxes in Rviz.
cd ~/catkin_ws/src/SFA3D/sfa/data_process
python3 veloster_2sides_dataset.py
cd ~/catkin_ws/src/SFA3D
cp ros/src/super_fast_object_detection/src/veloster_config_2sides.py sfa/config/
| Parameter | Type | Description |
|---|---|---|
| `saved_fn` | string | Path where the model will be saved. |
| `pretrained_path` | string | Pretrained model path. |
| `configs.input_size` | int | Image size of the BEV RGB map. |
| `configs.hm_size` | int | `configs.input_size / configs.down_ratio` |
| `configs.num_classes` | int | Number of classes. |
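For example, assuming a CenterNet-style down-sampling ratio of 4 (check `configs.down_ratio` in your config), the heatmap size follows directly from the input size:

```python
# Illustrative only: hm_size follows from input_size and down_ratio.
input_size = 608          # BEV RGB map size (pixels)
down_ratio = 4            # assumed CenterNet-style down-sampling factor
hm_size = input_size // down_ratio   # -> 152
```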
cd ~/catkin_ws/src/SFA3D/sfa
python3 train.py
source ~/catkin_ws/devel/setup.bash
rosrun super_fast_object_detection rosInference_2sides.py
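For orientation, `rosInference_2sides.py` follows the usual ROS node pattern: subscribe to the LiDAR topic, build the BEV map, run the model, and publish detections. A hedged sketch of that skeleton (the topic name and helper functions are assumptions, not the node's actual API):

```python
#!/usr/bin/env python3
import numpy as np
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def on_pointcloud(msg):
    # Convert PointCloud2 to an (N, 4) array: x, y, z, intensity
    points = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z", "intensity"),
                                           skip_nans=True)), dtype=np.float32)
    # bev = build_bev_map(points)          # hypothetical helper
    # detections = model_inference(bev)    # hypothetical helper
    # publish_detections(detections)       # hypothetical helper
    rospy.loginfo("received %d points", points.shape[0])

if __name__ == "__main__":
    rospy.init_node("super_fast_object_detection_sketch")
    rospy.Subscriber("/velodyne_points", PointCloud2, on_pointcloud, queue_size=1)
    rospy.spin()
```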
- To track the training progress, go to the `logs/` folder and run:
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
- Then go to http://localhost:6006/
@misc{Super-Fast-Accurate-3D-Object-Detection-PyTorch,
author = {Nguyen Mau Dung},
title = {{Super-Fast-Accurate-3D-Object-Detection-PyTorch}},
howpublished = {\url{https://github.com/maudzung/Super-Fast-Accurate-3D-Object-Detection}},
year = {2020}
}
[1] CenterNet: Objects as Points paper, PyTorch Implementation
[2] RTM3D: PyTorch Implementation
[3] Libra_R-CNN: PyTorch Implementation
The YOLO-based models with the same BEV maps input:
[4] Complex-YOLO: v4, v3, v2
3D LiDAR Point pre-processing:
[5] VoxelNet: PyTorch Implementation
${ROOT}
├── checkpoints/
│   └── fpn_resnet_18/
│       └── fpn_resnet_18_epoch_300.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   ├── label_2/
│       │   └── velodyne/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   └── velodyne/
│       └── classes_names.txt
├── sfa/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── fpn_resnet.py
│   │   ├── resnet.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── demo_utils.py
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   ├── train_utils.py
│   │   └── visualization_utils.py
│   ├── demo_2_sides.py
│   ├── demo_front.py
│   ├── test.py
│   └── train.py
└── ros/
    ├── README.md
    └── requirements.txt