CUDA 10.2, cuDNN 8.2.4, TensorRT 8.0.1.6, OpenCV 4.5.4
Please follow the official code (yolov5 v6.2):
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --cfg yolov5s-seg.yaml
python export.py --data coco128-seg.yaml --weights yolov5s-seg.pt --cfg yolov5s-seg.yaml --include engine
You can export the engine model directly with the official yolov5 v6.2 code, but the exported engine can only be used on the machine that built it. I suggest exporting an ONNX model first and then using the code provided here: even if your hardware or software configuration changes, you can regenerate the engine you need as long as you have the ONNX file.
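For reference, the ONNX-to-engine conversion that the provided onnx2trt tool performs can be sketched with the TensorRT 8 C++ API roughly as below. This is a minimal illustration, not the repo's exact code: error handling is omitted, and the workspace size and FP16 choice are assumptions you may want to change.

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdint>
#include <fstream>
#include <iostream>

// Minimal logger required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main(int argc, char** argv) {
    // argv[1]: input .onnx path, argv[2]: output .engine path
    if (argc != 3) return 1;
    Logger logger;

    auto builder = nvinfer1::createInferBuilder(logger);
    const uint32_t flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);

    // Parse the ONNX file into the TensorRT network definition.
    auto parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile(argv[1],
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB (assumed; tune for your GPU)
    if (builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16);

    // Build and serialize the engine, then write it to disk.
    auto serialized = builder->buildSerializedNetwork(*network, *config);
    std::ofstream out(argv[2], std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

Because the engine is optimized for the specific GPU and TensorRT version that built it, this step must be rerun on each target machine, which is exactly why keeping the ONNX file around is useful.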
python export.py --data coco128-seg.yaml --weights yolov5s-seg.pt --cfg yolov5s-seg.yaml --include onnx
A file 'yolov5s-seg.onnx' will be generated.
cd to the directory where you downloaded this repository
copy 'yolov5s-seg.onnx' to models/
mkdir build
cd build
cmake ..
make
// the executables onnx2trt and trt_infer will be generated
sudo ./onnx2trt ../models/yolov5s-seg.onnx ../models/yolov5s-seg.engine
// a file 'yolov5s-seg.engine' will be generated.
sudo ./trt_infer ../models/yolov5s-seg.engine ../images/street.jpg
```cpp
for (int i = 0; i < 10; i++) { // measure inference speed over 10 runs
    auto start = std::chrono::system_clock::now();
    doInference(*context, data, prob, prob1, 1);
    auto end = std::chrono::system_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << "ms" << std::endl;
}
```
The inference time is stable at about 10 ms (GTX 1080 Ti).