This repository explains how RangeNet++ inference works with the TensorRT and C++ interface.
Developed by Xieyuanli Chen, Andres Milioto and Jens Behley.
Modified by Tongda Yang for TensorRT 8.
My environment is:
- TensorRT = 8.0
- CUDA driver = 11.2
- CUDA runtime = 11.1
- cuDNN = 8.2.1.32
- Ubuntu = 20.04
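To confirm your toolchain matches this list, a quick check like the following can help. This is a sketch: it assumes `nvcc` is on your PATH after the CUDA install and that the cuDNN headers live under /usr/include (a default Debian-package layout); adjust the paths if your install differs.

```shell
# Report the CUDA toolkit version (assumes nvcc is on PATH).
cuda_info=$(nvcc --version 2>/dev/null | grep -i "release" || echo "nvcc not found on PATH")
echo "$cuda_info"

# Report the cuDNN version (assumes headers under /usr/include; adjust if installed elsewhere).
cudnn_hdr=/usr/include/cudnn_version.h
cudnn_info=$(grep -E "define CUDNN_(MAJOR|MINOR|PATCHLEVEL)" "$cudnn_hdr" 2>/dev/null || echo "cudnn_version.h not found")
echo "$cudnn_info"
```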
First, prepare your NVIDIA driver and CUDA:
CUDA package on CDN: Link
cuDNN package on CDN: Link
Then install TensorRT 8:
(We strongly recommend installing from the TensorRT tar package, following the steps below.)
- Install the tar package that fits your environment
Official install link: Link
TensorRT GA stands for general availability; TensorRT EA stands for early access.
We recommend installing the GA version, as the official tutorial advises.
- Extract the archive
tar zxf TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.tar.gz
Move it to the path you want:
mv TensorRT-8.0.1.6 /path/you/want/to/set
- Set the environment variables in ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/you/want/to/set/TensorRT-8.0.1.6/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/you/want/to/set/TensorRT-8.0.1.6/targets/x86_64-linux-gnu/lib
Remember to source your file
source ~/.bashrc
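To verify the exports took effect, you can list the loader-path entries. A minimal check, reusing the /path/you/want/to/set placeholder from the steps above (substitute your own directory):

```shell
# Substitute the directory you extracted TensorRT into.
TRT_ROOT="/path/you/want/to/set/TensorRT-8.0.1.6"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${TRT_ROOT}/lib:${TRT_ROOT}/targets/x86_64-linux-gnu/lib"
# One entry per line; both TensorRT directories should show up.
trt_entries=$(echo "${LD_LIBRARY_PATH}" | tr ':' '\n' | grep -c "TensorRT-8.0.1.6")
echo "TensorRT entries on LD_LIBRARY_PATH: ${trt_entries}"
```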
- Copy the libraries and headers to the system paths (run from the TensorRT-8.0.1.6 directory)
sudo cp -r ./lib/* /usr/lib
sudo cp -r ./include/* /usr/include
- Test your TensorRT install with an official example
For example, in TensorRT-8.0.1.6/samples/sampleMNIST:
make
Then install the other system dependencies:
$ sudo apt-get update
$ sudo apt-get install -yqq build-essential python3-dev python3-pip apt-utils git cmake libboost-all-dev libyaml-cpp-dev libopencv-dev
Then install the Python packages needed:
$ sudo apt install python-empy
$ sudo pip install catkin_tools trollius numpy
We use the catkin tool to build the library.
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/src
Download this repository into the src directory
cd ~/catkin_ws
catkin init
catkin build rangenet_lib
You also need to download the pre-trained darknet model: Link
A single LiDAR scan for running the demo can be found in the example folder: example/000000.bin. For more LiDAR data, you can download the KITTI odometry dataset.
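Before running inference it can be worth sanity-checking a scan file: a KITTI-style .bin stores four float32 values (x, y, z, remission) per point, so its size must be a multiple of 16 bytes. A small check, assuming the example scan path above and GNU stat:

```shell
scan="example/000000.bin"
if [ -f "$scan" ]; then
  size=$(stat -c%s "$scan")            # file size in bytes (GNU stat)
  if [ $((size % 16)) -eq 0 ]; then
    scan_status="ok: $((size / 16)) points"
  else
    scan_status="unexpected size: $size bytes"
  fi
else
  scan_status="$scan not found"
fi
echo "$scan_status"
```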
To infer a single LiDAR scan and visualize the semantic point cloud:
# go to the root path of the catkin workspace
$ cd ~/catkin_ws
# use --verbose or -v to get verbose mode
$ ./devel/lib/rangenet_lib/infer -h # help
$ ./devel/lib/rangenet_lib/infer -p /path/to/the/pretrained/model -s /path/to/the/scan.bin --verbose
Notice: on the first run, it will take several minutes to generate a .trt
model for the C++ interface.