yolo-tensorrt

darknet -> tensorrt. YoloV4 and YoloV3 use the raw darknet *.weights and *.cfg files. If this wrapper is useful to you, please star it.


Yolov3 Yolov4 TensorRT Implementation


news: batch inference support

INTRODUCTION

This project is a wrapper around the official NVIDIA yolo-tensorrt implementation. You need a trained YOLO model (.weights) and the corresponding .cfg file from darknet.

  • yolov3, yolov3-tiny

  • yolov4, yolov4-tiny

  • yolov5S

PLATFORM & PERFORMANCE

  • windows 10
  • ubuntu 18.04
  • L4T (Jetson platform)
model            gpu                              precision   detect time (incl. pre/post processing)
yolov3-416x416   jetson nano (15W)                FP16        250 ms
yolov3-416x416   jetson xavier nx (15W, 6-core)   FP32        120 ms
yolov3-416x416   jetson xavier nx (15W, 6-core)   FP16        45 ms
yolov3-416x416   jetson xavier nx (15W, 6-core)   INT8        35 ms

WRAPPER

Prepare the pretrained .weights and .cfg files, then use the detector as sketched below.

Detector detector;
Config config;
// ... fill in config (model paths, precision, ...) ...
detector.init(config);

std::vector<cv::Mat> vec_image{ cv::imread("test.jpg") };  // any test image (placeholder name)
std::vector<BatchResult> res;
detector.detect(vec_image, res);
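The layout of BatchResult is not spelled out in this section. The sketch below continues the snippet above and assumes each BatchResult is a container of detections carrying id, prob and cv::Rect rect members, with one BatchResult per input image; check the shipped headers for the exact definition.

// Consume the results: draw each detection on its source image (continuation of the snippet above).
// Assumes BatchResult is a container of { id, prob, rect } detections -- verify against the headers.
for (size_t i = 0; i < res.size(); ++i)
{
	for (const auto &r : res[i])
	{
		cv::rectangle(vec_image[i], r.rect, cv::Scalar(255, 0, 0), 2);
		std::cout << "image " << i << ": class " << r.id << " prob " << r.prob << std::endl;
	}
}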

Build and use yolo-trt as a DLL or SO library

windows 10

  • dependencies: TensorRT 7.1.3.4, CUDA 11.0, cuDNN 8.0, OpenCV 4, VS2015

  • build:

    open the MSVC solution file sln/sln.sln

    • dll project : builds the TensorRT YOLO detector DLL
    • demo project : a test application for the DLL

ubuntu & L4T (jetson)

The project generates the libdetector.so library together with sample code. If you want to use libdetector.so in your own project, the CMake sketch after the Jetson notes below may help.

git clone https://github.com/enazoe/yolo-tensorrt.git
cd yolo-tensorrt/
mkdir build
cd build/
cmake ..
make
./yolo-trt
  • jetson nano JetPack 4.2.2

    note: set compute_53,code=sm_53 in the CMake file.

  • jetson xavier nx JetPack 4.4

    note: set compute_72,code=sm_72 in the CMake file.
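To consume libdetector.so from your own project, a CMakeLists.txt along the lines of the sketch below may help. Paths, the target name and the header location are placeholders for illustration, not part of the project.

# Hypothetical consumer CMakeLists.txt -- adjust the placeholder paths to your setup.
cmake_minimum_required(VERSION 3.10)
project(my_yolo_app)

set(CMAKE_CXX_STANDARD 11)

find_package(OpenCV REQUIRED)
find_package(CUDA REQUIRED)

# where the yolo-tensorrt headers and the built libdetector.so live (placeholders)
include_directories(${OpenCV_INCLUDE_DIRS} /path/to/yolo-tensorrt/modules)
link_directories(/path/to/yolo-tensorrt/build)

add_executable(my_yolo_app main.cpp)
target_link_libraries(my_yolo_app detector ${OpenCV_LIBS} ${CUDA_LIBRARIES})

# on Jetson, make sure yolo-tensorrt itself was built with the matching
# -gencode arch=compute_xx,code=sm_xx setting (see the notes above)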

API

struct Config
{
	// path to the darknet .cfg file
	std::string file_model_cfg = "configs/yolov4.cfg";

	// path to the darknet .weights file
	std::string file_model_weights = "configs/yolov4.weights";

	// confidence threshold below which detections are discarded
	float detect_thresh = 0.9;

	// network type (see ModelType for the supported nets)
	ModelType net_type = YOLOV4;

	// inference precision (FP32 / FP16 / INT8)
	Precision inference_precison = INT8;

	// index of the GPU to run on
	int gpu_id = 0;

	// text file listing the image paths used for INT8 calibration
	std::string calibration_image_list_file_txt = "configs/calibration_images.txt";

	// maximum batch size the TensorRT engine is built for
	int n_max_batch = 4;
};

class API Detector
{
public:
	explicit Detector();
	~Detector();

	// build the TensorRT engine from the darknet files referenced in config
	void init(const Config &config);

	// run batched inference: one BatchResult is produced per input image
	void detect(const std::vector<cv::Mat> &mat_image, std::vector<BatchResult> &vec_batch_result);

private:
	// non-copyable; the implementation is hidden behind a pimpl pointer
	Detector(const Detector &);
	const Detector &operator =(const Detector &);
	class Impl;
	Impl *_impl;
};
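Putting the API together, here is a minimal end-to-end sketch. The header name class_detector.h, the FP16 enumerator, the threshold value and the image file names are assumptions for illustration; check the shipped headers for the exact names.

// Minimal end-to-end sketch: configure, build the engine, run one batch.
// NOTE: header name, FP16 enumerator and file names are illustrative assumptions.
#include "class_detector.h"
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
	Config config;
	config.net_type = YOLOV4;
	config.file_model_cfg = "configs/yolov4.cfg";
	config.file_model_weights = "configs/yolov4.weights";
	config.inference_precison = FP16;   // FP32 / FP16 / INT8
	config.detect_thresh = 0.5;
	config.gpu_id = 0;
	config.n_max_batch = 2;             // engine is built for at most 2 images per call

	Detector detector;
	detector.init(config);

	// batch size must not exceed n_max_batch
	std::vector<cv::Mat> batch{ cv::imread("image0.jpg"), cv::imread("image1.jpg") };
	std::vector<BatchResult> results;
	detector.detect(batch, results);

	for (size_t i = 0; i < results.size(); ++i)
		std::cout << "image " << i << ": " << results[i].size() << " detections" << std::endl;
	return 0;
}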
