paddle_onnx_infer_cpp

Inference code for PaddleDetection YOLO models in ONNX format

YOLOv8 ONNX Inference in C++

This example demonstrates how to perform inference with YOLOv8 ONNX models. Currently supported exporters: PaddleDetection v2.7 and Ultralytics v8.1.
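
Under the hood, the C++ side runs the model through the ONNX Runtime C++ API. The following is a minimal sketch of one inference pass, not this repo's actual source: the model path, the 1x3x640x640 input shape, and the "images"/"output0" tensor names are assumptions based on a typical Ultralytics export, so adapt them to the model you export below.

// Minimal single-image inference pass with the ONNX Runtime C++ API (Linux).
#include <onnxruntime_cxx_api.h>
#include <array>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov8");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(1);
    // "model.onnx" is a placeholder; use the model produced by `bash run.sh export`.
    Ort::Session session(env, "model.onnx", opts);

    // Preprocessed image in NCHW float32; 1x3x640x640 is the usual YOLOv8 input.
    std::vector<float> input(1 * 3 * 640 * 640, 0.0f);
    std::array<int64_t, 4> shape{1, 3, 640, 640};
    auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value tensor = Ort::Value::CreateTensor<float>(
        mem, input.data(), input.size(), shape.data(), shape.size());

    // Tensor names depend on the exporter: Ultralytics models typically use
    // "images"/"output0", while PaddleDetection exports usually take "image"
    // plus a "scale_factor" input. Adjust to your model.
    const char* in_names[] = {"images"};
    const char* out_names[] = {"output0"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &tensor, 1, out_names, 1);
    // Raw detections; decode boxes/scores and apply NMS from here.
    const float* raw = outputs[0].GetTensorData<float>();
    (void)raw;
    return 0;
}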

Usage

# download onnx_runtime_lib
bash run.sh download_runtime_lib
# change the mount directory in run.sh before this step
bash run.sh mount
# create docker
bash run.sh create
bash run.sh build
# export the engine & ONNX models; the result depends on the hardware environment
bash run.sh export
bash run.sh run
# run the PaddleDetection TensorRT version first
bash run.sh eval

More versions of onnx-runtime-lib

download_url

Your own model

If you want to load your own ONNX model, you may need a matching version of the ONNX Runtime library.
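
Input names and shapes vary between exporters, so a quick way to adapt the pipeline to a custom model is to query them from the session at startup. A minimal sketch follows; the model path is a placeholder, and GetInputNameAllocated requires ONNX Runtime >= 1.13.

// Print every input of an ONNX model so preprocessing can be adapted to it.
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "inspect");
    Ort::SessionOptions opts;
    // "your_model.onnx" is a placeholder path.
    Ort::Session session(env, "your_model.onnx", opts);

    Ort::AllocatorWithDefaultOptions allocator;
    for (size_t i = 0; i < session.GetInputCount(); ++i) {
        auto name = session.GetInputNameAllocated(i, allocator);  // ORT >= 1.13
        auto dims = session.GetInputTypeInfo(i)
                        .GetTensorTypeAndShapeInfo()
                        .GetShape();
        std::cout << "input " << i << ": " << name.get() << " [";
        for (int64_t d : dims) std::cout << ' ' << d;  // -1 means a dynamic axis
        std::cout << " ]\n";
    }
    return 0;
}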