PointPillars PyTorch Model Conversion to ONNX, and Using TensorRT to Load the ONNX IR for Fast Inference
Welcome to PointPillars (this section originates from the nuTonomy/second.pytorch README).
This repo demonstrates how to reproduce the results from PointPillars: Fast Encoders for Object Detection from Point Clouds (CVPR 2019) on the KITTI dataset by making the minimum required changes to the preexisting open-source codebase SECOND.
Parts of this code also draw on the open-source repository by k0suke-murakami (https://github.com/k0suke-murakami/train_point_pillars).
This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.
WARNING: This code is not being actively maintained. This code can be used to reproduce the results in the first version of the paper, https://arxiv.org/abs/1812.05784v1. For an actively maintained repository that can also reproduce PointPillars results on nuScenes, we recommend using SECOND. We are not the owners of the repository, but we have worked with the author and endorse his code.
This is a fork of SECOND for KITTI object detection and the relevant subset of the original README is reproduced here.
If you do not want to spend time setting up the PointPillars environment, pull my Docker image:
docker pull smallmunich/suke_pointpillars:v1
Attention: after launching this Docker container, run the following command:
conda activate pointpillars
You can then run the training, evaluation, or ONNX model generation command lines.
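For example, a typical launch looks like this (a sketch, not from the original repo; it assumes the NVIDIA Container Toolkit is installed, so adjust the flags to your setup):

docker run --gpus all -it smallmunich/suke_pointpillars:v1 /bin/bash
conda activate pointpillars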
git clone https://github.com/SmallMunich/nutonomy_pointpillars.git
It is recommended to use the Anaconda package manager.
First, use Anaconda to configure as many packages as possible.
conda create -n pointpillars python=3.6 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda
Then use pip for the packages missing from Anaconda.
pip install --upgrade pip
pip install fire tensorboardX
Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND codebase expects it to be correctly configured. However, I suggest installing spconv instead of SparseConvNet.
git clone git@github.com:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
Additionally, you may need to install Boost geometry:
sudo apt-get install libboost-all-dev
You need to add the following environment variables for numba to ~/.bashrc:
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
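After sourcing ~/.bashrc, a quick sanity check (not part of the original repo) confirms that numba can see the CUDA driver:

python -c "from numba import cuda; print(cuda.is_available())"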
Add nutonomy_pointpillars/ to your PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:/your_root_path/nutonomy_pointpillars/
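To confirm the path is picked up, a simple import check (illustrative) should succeed without errors:

python -c "import second; print('ok')"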
Download KITTI dataset and create some directories first:
└── KITTI_DATASET_ROOT
├── training <-- 7481 train data
| ├── image_2 <-- for visualization
| ├── calib
| ├── label_2
| ├── velodyne
| └── velodyne_reduced <-- empty directory
└── testing <-- 7518 test data
├── image_2 <-- for visualization
├── calib
├── velodyne
└── velodyne_reduced <-- empty directory
Note: PointPillars' protos use KITTI_DATASET_ROOT=/data/sets/kitti_second/.
Then create the dataset infos, reduced point clouds, and ground-truth database:
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
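These commands should produce kitti_infos_train.pkl, kitti_infos_val.pkl, the reduced point clouds, and kitti_dbinfos_train.pkl under KITTI_DATASET_ROOT. As a rough sanity check, you can inspect one of the generated info files (a minimal sketch; the path assumes the default root noted above):

import pickle

# each entry describes one sample (image, calib, annotations, point cloud path)
with open('/data/sets/kitti_second/kitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)
print(len(infos))  # should match the train split size (3712 for the common split)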
The config file needs to be edited to point to the above datasets:
train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
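If you prefer not to edit the file by hand, a substitution along these lines works (illustrative; adjust the proto path and dataset root to your setup):

sed -i 's|KITTI_DATASET_ROOT|/data/sets/kitti_second|g' configs/pointpillars/car/xyres_16.proto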
cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
- If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
- If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
- Training only supports a single GPU.
- Training uses a batch size of 2, which should fit in memory on most standard GPUs.
- On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.
cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
- Detection results will be saved in model_dir/eval_results/step_xxx.
- By default, results are stored as a result.pkl file. To save as official KITTI label format use --pickle_result=False.
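To peek at the pickled detections, something like this works (illustrative; the step directory name depends on the checkpoint):

import pickle

# each element holds the predicted boxes and scores for one validation sample
with open('/path/to/model_dir/eval_results/step_xxx/result.pkl', 'rb') as f:
    detections = pickle.load(f)
print(len(detections))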
To generate the ONNX IR, first modify this Python file: second/pytorch/models/voxelnet.py
voxel_features = self.voxel_feature_extractor(pillar_x, pillar_y, pillar_z, pillar_i,
                                              num_points, x_sub_shaped, y_sub_shaped, mask)
###################################################################################
# return voxel_features  ### onnx voxel_features export
# middle_feature_extractor for trim shape
voxel_features = voxel_features.squeeze()
voxel_features = voxel_features.permute(1, 0)

Uncomment this line before exporting: return voxel_features
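For orientation, after the edit this part of the forward pass behaves roughly as follows when exporting pfe.onnx (a sketch, not a verbatim copy of voxelnet.py):

# sketched: with the return uncommented, forward stops after the pillar
# feature extractor, so the ONNX export traces only that subgraph (pfe.onnx)
voxel_features = self.voxel_feature_extractor(pillar_x, pillar_y, pillar_z, pillar_i,
                                              num_points, x_sub_shaped, y_sub_shaped, mask)
return voxel_features

# the squeeze/permute trimming below is skipped during export; re-comment the
# return line afterwards to restore normal training and evaluation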
Then you can run the IR conversion command:
cd ~/second.pytorch/second/
python pytorch/train.py onnx_model_generate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
To check the converted pfe.onnx and rpn.onnx models, refer to this Python file: check_onnx_valid.py
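A minimal validity check looks something like this (a sketch; it assumes onnx and onnxruntime are installed, and the random inputs are only shape placeholders, not real pillar tensors):

import onnx
import onnxruntime as ort
import numpy as np

# structural check: the exported graph must be well-formed
model = onnx.load('pfe.onnx')
onnx.checker.check_model(model)

# one forward pass with random inputs shaped as the graph declares
sess = ort.InferenceSession('pfe.onnx', providers=['CPUExecutionProvider'])
feeds = {}
for inp in sess.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # 1 for dynamic dims
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, feeds)
print([o.shape for o in outputs])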
Now we can compare the ONNX results with the original PyTorch model's predictions.
The pfe.onnx and rpn.onnx prediction files are located in second/pytorch/onnx_predict_outputs:
eval_voxel_features.txt
eval_voxel_features_onnx.txt
eval_rpn_features.txt
eval_rpn_onnx_features.txt
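Since these are plain-text dumps, a simple numpy comparison gives a quick agreement check (illustrative; it assumes both files hold whitespace-separated floats of the same shape):

import numpy as np

torch_out = np.loadtxt('eval_voxel_features.txt')
onnx_out = np.loadtxt('eval_voxel_features_onnx.txt')
print('max abs diff:', np.abs(torch_out - onnx_out).max())  # expect a small value, e.g. < 1e-4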
- First, you need the onnx_tensorrt environment:
docker pull smallmunich/onnx_tensorrt:latest
To run TensorRT inference with the pfe.onnx and rpn.onnx models, refer to this Python file: tensorrt_onnx_infer.py
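Inside that image, the onnx-tensorrt backend gives the shortest path to a test run (a sketch; the input shape below is an illustrative guess at the RPN pseudo-image, not a value taken from the repo):

import onnx
import onnx_tensorrt.backend as backend
import numpy as np

# build a TensorRT engine directly from the ONNX graph
model = onnx.load('rpn.onnx')
engine = backend.prepare(model, device='CUDA:0')

dummy = np.random.rand(1, 64, 496, 432).astype(np.float32)  # illustrative shape
outputs = engine.run(dummy)
print([o.shape for o in outputs])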
Now we can compare the TensorRT results with the ONNX model's predictions.
The prediction files are located in second/pytorch/onnx_predict_outputs:
pfe_rpn_onnx_outputs.txt
pfe_tensorrt_outputs.txt
rpn_onnx_outputs.txt
rpn_tensorrt_outputs.txt
- More details will be updated on my Chinese blog:
- Exporting from PyTorch to the ONNX IR: https://blog.csdn.net/Small_Munich/article/details/101559424
- ONNX comparison: https://blog.csdn.net/Small_Munich/article/details/102073540
- TensorRT comparison: https://blog.csdn.net/Small_Munich/article/details/102489147
- More updates to come; best wishes.