
Deep neural network library and toolkit to do high performance inference on NVIDIA Jetson platforms


tkDNN

tkDNN is a Deep Neural Network library built with cuDNN and TensorRT primitives, specifically designed to work on NVIDIA Jetson boards. It has been tested on the TK1 (branch cudnn2), TX1, TX2, AGX Xavier and several discrete GPUs. The main goal of this project is to exploit NVIDIA boards as much as possible to obtain the best inference performance. It does not allow training.

Accepted paper @ IRC 2020, to be published soon: M. Verucchi, L. Bartoli, F. Bagni, F. Gatti, P. Burgio and M. Bertogna, "Real-Time clustering and LiDAR-camera fusion on embedded platforms for self-driving cars", in proceedings of the IEEE International Conference on Robotic Computing (IRC), 2020.


Dependencies

This branch works on every NVIDIA GPU that supports the dependencies:

  • CUDA 10.0
  • cuDNN 7.603
  • TensorRT 6.01
  • OpenCV 3.4
  • yaml-cpp 0.5.2 (sudo apt install libyaml-cpp-dev)
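
If you are not sure which versions are installed, the following commands can help checking them (a sketch; package names and header locations vary between Jetson images and desktop installs):

nvcc --version                        # CUDA toolkit version
dpkg -l | grep -E 'cudnn|nvinfer'     # cuDNN and TensorRT packages (Debian/Ubuntu)
pkg-config --modversion opencv4       # OpenCV version, if installed with pkg-config support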

About OpenCV

To compile and install OpenCV4 with contrib, use the script install_OpenCV4.sh. It will download and compile OpenCV in the Download folder.

bash scripts/install_OpenCV4.sh

When using an OpenCV build without contrib, comment the definition of OPENCV_CUDACONTRIB in include/tkDNN/DetectionNN.h. When it is commented, the preprocessing of the networks is computed on the CPU, otherwise on the GPU. In the latter case some milliseconds are saved in the end-to-end latency.
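
For example, the definition can be commented out from the command line (a sketch, assuming the macro is set with a plain #define at the start of a line; adapt the pattern if it is defined differently):

sed -i 's|^#define OPENCV_CUDACONTRIB|// #define OPENCV_CUDACONTRIB|' include/tkDNN/DetectionNN.h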

How to compile this repo

Build with cmake. If using Ubuntu 18.04, a newer version of cmake is needed (3.15 or above).

git clone https://github.com/ceccocats/tkDNN
cd tkDNN
mkdir build
cd build
cmake .. 
make
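
If the cmake shipped with Ubuntu 18.04 is too old, one way to get a newer one without touching the system package is via pip (an assumption, not an official project instruction; Kitware's APT repository is another option):

python3 -m pip install --user 'cmake>=3.15'
export PATH=$HOME/.local/bin:$PATH     # make the pip-installed cmake visible in the current shell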

Workflow

Steps needed to do inference on tkDNN with a custom neural network.

  • Build and train an NN model with your favorite framework.
  • Export the weights and biases of each layer and save them in a binary file (one file per layer).
  • Export the outputs of each layer and save them in a binary file (one file per layer).
  • Create a new test and define the network layer by layer, using the extracted weights and the saved outputs to check the results.
  • Do inference.

How to export weights

Weights are essential for any network to run inference. For each test, a folder organized as follows is needed (in the build folder):

    test_nn
        |---- layers/ (folder containing a binary file for each layer with the corresponding weights and bias)
        |---- debug/  (folder containing a binary file for each layer with the corresponding outputs)

Therefore, once the weights have been exported, the layers and debug folders should be placed inside the folder of the corresponding test.
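
For example, assuming the weights for a hypothetical test test_nn were exported to /path/to/export, the layout above can be reproduced like this (a sketch; the folder name must match the paths expected by the test):

cd build
mkdir -p test_nn
cp -r /path/to/export/layers /path/to/export/debug test_nn/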

1) Export weights from darknet

To export weights for NNs defined in the darknet framework, use this fork of darknet and follow these steps to obtain correct debug and layers folders, ready for tkDNN.

git clone https://git.hipert.unimore.it/fgatti/darknet.git
cd darknet
make
mkdir layers debug
./darknet export <path-to-cfg-file> <path-to-weights> layers

N.b. Compile for CPU (leave GPU=0 in the Makefile) if you also want the debug outputs.

2) Export weights for DLA34 and ResNet101

To get the weights and outputs needed to run the dla34 and resnet101 tests, use the Python script and the Anaconda environment included in the repository.

Create the Anaconda environment and activate it:

conda env create -f file_name.yml
source activate env_name
python <script name>

3) Export weights for CenterNet

To get the weights needed to run the CenterNet tests, use this fork of the original CenterNet.

git clone https://github.com/sapienzadavide/CenterNet.git
  • follow the instructions in README.md and INSTALL.md
python demo.py --input_res 512 --arch resdcn_101 ctdet --demo /path/to/image/or/folder/or/video/or/webcam --load_model ../models/ctdet_coco_resdcn101.pth --exp_wo --exp_wo_dim 512
python demo.py --input_res 512 --arch dla_34 ctdet --demo /path/to/image/or/folder/or/video/or/webcam --load_model ../models/ctdet_coco_dla_2x.pth --exp_wo --exp_wo_dim 512

4) Export weights for MobileNetSSD

To get the weights needed to run the MobileNet tests, use this fork of a PyTorch implementation of the SSD network.

git clone https://github.com/mive93/pytorch-ssd
cd pytorch-ssd
conda env create -f env_mobv2ssd.yml
python run_ssd_live_demo.py mb2-ssd-lite <pth-model-file> <labels-file>

Run the demo

To run an object detection demo, follow these steps (example with YOLOv3):

rm yolo3_fp32.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_fp32.rt ../demo/yolo_test.mp4 y

In general the demo program takes 4 parameters:

./demo <network-rt-file> <path-to-video> <kind-of-network> <number-of-classes>

where

  • <network-rt-file> is the rt file generated by a test
  • <path-to-video> is the path to a video file or a camera input
  • <kind-of-network> is the type of network. Three types are currently supported: y (YOLO family), c (CenterNet family) and m (MobileNet-SSD family)
  • <number-of-classes> is the number of classes the network is trained on. N.b. FP32 inference is used by default (see the example below).
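
For example, to run a YOLO-family network trained on the 80 COCO classes with the class count given explicitly:

./demo yolo3_fp32.rt ../demo/yolo_test.mp4 y 80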


FP16 inference

To run an object detection demo with FP16 inference, follow these steps (example with YOLOv3):

export TKDNN_MODE=FP16  # set the half floating point optimization
rm yolo3_fp16.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_fp16.rt ../demo/yolo_test.mp4 y

N.b. Using FP16 inference will lead to some small errors in the results (in the first or second decimal place).

INT8 inference

To run an object detection demo with INT8 inference, follow these steps (example with YOLOv3):

export TKDNN_MODE=INT8  # set the 8-bit integer optimization

# image_list.txt contains the list of the absolute paths to the calibration images
export TKDNN_CALIB_IMG_PATH=/path/to/calibration/image_list.txt

# label_list.txt contains the list of the absolute paths to the calibration labels
export TKDNN_CALIB_LABEL_PATH=/path/to/calibration/label_list.txt
rm yolo3_int8.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_int8.rt ../demo/yolo_test.mp4 y

N.b. Using INT8 inference will lead to some errors in the results.

N.b. The test will be slower: this is due to the INT8 calibration, which may take some time to complete.

N.b. INT8 calibration requires TensorRT version greater than or equal to 6.0
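
The two calibration lists are plain text files with one absolute path per line; they can be generated with standard shell tools, for example (a sketch, assuming JPEG images and one .txt label file per image):

find /path/to/calibration/images -name '*.jpg' | sort > image_list.txt
find /path/to/calibration/labels -name '*.txt' | sort > label_list.txt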

BatchSize bigger than 1

export TKDNN_BATCHSIZE=2
# build tensorRT files

This will create a TensorRT file with the desired max batch size. The test will still run with a batch of 1, but the created TensorRT engine can manage the desired batch size.

Test batch Inference

This will test the network with random input and check if the output of each batch is the same.

./test_rtinference <network-rt-file> <number-of-batches>
# <number-of-batches> should be less than or equal to the max batch size of the <network-rt-file>

# example
export TKDNN_BATCHSIZE=4           # set max batch size
rm yolo3_fp32.rt                   # be sure to delete (or move) old TensorRT files
./test_yolo3                       # build RT file
./test_rtinference yolo3_fp32.rt 4 # test with a batch size of 4

mAP demo

To compute mAP, precision, recall and F1 score, run the map_demo.

A validation set is needed. To download COCO_val2017 (80 classes) run (from the root folder):

bash scripts/download_validation.sh COCO

To download Berkeley_val (10 classes) run (from the root folder):

bash scripts/download_validation.sh BDD

To compute the mAP, the following parameters are needed:

./map_demo <network rt> <network type [y|c|m]> <labels file path> <config file path>

where

  • <network rt>: rt file of the chosen network on which to compute the mAP.
  • <network type [y|c|m]>: type of network. Right now only y (YOLO), c (CenterNet) and m (MobileNet) are allowed
  • <labels file path>: path to a text file containing the paths of all the ground-truth labels. All the ground-truth labels must be in a folder called 'labels'. Next to the 'labels' folder there should also be an 'images' folder, containing all the ground-truth images with the same names as the labels. For example, if there is a label path/to/labels/000001.txt there should be a corresponding image path/to/images/000001.jpg (see the sketch after this list).
  • <config file path>: path to a yaml file with the parameters needed for the mAP computation, similar to demo/config.yaml
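
A sketch of how the labels file can be generated, assuming a hypothetical layout with the ground truth in /data/val/labels and /data/val/images:

find /data/val/labels -name '*.txt' > all_labels.txt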

Example:

cd build
./map_demo dla34_cnet_FP32.rt c ../demo/COCO_val2017/all_labels.txt ../demo/config.yaml

Existing tests and supported networks

Test Name             | Network                             | Dataset   | N Classes | Input size | Weights
--------------------- | ----------------------------------- | --------- | --------- | ---------- | -------
yolo                  | YOLO v2 [1]                         | COCO 2014 | 80        | 608x608    | weights
yolo_224              | YOLO v2 [1]                         | COCO 2014 | 80        | 224x224    | weights
yolo_berkeley         | YOLO v2 [1]                         | BDD100K   | 10        | 416x736    | weights
yolo_relu             | YOLO v2 (with ReLU, not Leaky) [1]  | COCO 2014 | 80        | 416x416    | weights
yolo_tiny             | YOLO v2 tiny [1]                    | COCO 2014 | 80        | 416x416    | weights
yolo_voc              | YOLO v2 [1]                         | VOC       | 21        | 416x416    | weights
yolo3                 | YOLO v3 [2]                         | COCO 2014 | 80        | 416x416    | weights
yolo3_512             | YOLO v3 [2]                         | COCO 2017 | 80        | 512x512    | weights
yolo3_berkeley        | YOLO v3 [2]                         | BDD100K   | 10        | 320x544    | weights
yolo3_coco4           | YOLO v3 [2]                         | COCO 2014 | 4         | 416x416    | weights
yolo3_flir            | YOLO v3 [2]                         | FREE FLIR | 3         | 320x544    | weights
yolo3_tiny            | YOLO v3 tiny [2]                    | COCO 2014 | 80        | 416x416    | weights
yolo3_tiny512         | YOLO v3 tiny [2]                    | COCO 2017 | 80        | 512x512    | weights
dla34                 | Deep Layer Aggregation (DLA) 34 [3] | COCO 2014 | 80        | 224x224    | weights
dla34_cnet            | CenterNet (DLA34 backend) [4]       | COCO 2017 | 80        | 512x512    | weights
mobilenetv2ssd        | MobileNet v2 SSD Lite [5]           | VOC       | 21        | 300x300    | weights
mobilenetv2ssd512     | MobileNet v2 SSD Lite [5]           | COCO 2017 | 81        | 512x512    | weights
resnet101             | ResNet 101 [6]                      | COCO 2014 | 80        | 224x224    | weights
resnet101_cnet        | CenterNet (ResNet101 backend) [4]   | COCO 2017 | 80        | 512x512    | weights
csresnext50-panet-spp | Cross Stage Partial Network [7]     | COCO 2014 | 80        | 416x416    | weights
yolo4                 | YOLOv4 [8]                          | COCO 2017 | 80        | 416x416    | weights

References

  1. Redmon, Joseph, and Ali Farhadi. "YOLO9000: better, faster, stronger." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
  2. Redmon, Joseph, and Ali Farhadi. "Yolov3: An incremental improvement." arXiv preprint arXiv:1804.02767 (2018).
  3. Yu, Fisher, et al. "Deep layer aggregation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
  4. Zhou, Xingyi, Dequan Wang, and Philipp Krähenbühl. "Objects as points." arXiv preprint arXiv:1904.07850 (2019).
  5. Sandler, Mark, et al. "Mobilenetv2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
  6. He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
  7. Wang, Chien-Yao, et al. "CSPNet: A New Backbone that can Enhance Learning Capability of CNN." arXiv preprint arXiv:1911.11929 (2019).
  8. Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "YOLOv4: Optimal Speed and Accuracy of Object Detection." arXiv preprint arXiv:2004.10934 (2020).