YOLOv10 OpenVINO C++ Inference

Implementing YOLOv10 object detection using OpenVINO for efficient and accurate real-time inference in C++.

Features

  • Support for ONNX and OpenVINO IR model formats
  • Support for FP32, FP16, and INT8 precisions
  • Support for loading models with dynamic input shapes

Tested on Ubuntu 18.04, 20.04, 22.04.
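
Under the hood, both model formats and the dynamic-shape support map onto the OpenVINO Runtime C++ API, which loads .onnx and IR .xml files through the same call. The following is a minimal sketch of that loading path, not code from this repository; the file name yolov10n.onnx and the 1x3x640x640 input size are illustrative assumptions.

#include <openvino/openvino.hpp>

int main() {
  ov::Core core;
  // read_model() accepts both ONNX (.onnx) and OpenVINO IR (.xml) files.
  std::shared_ptr<ov::Model> model = core.read_model("yolov10n.onnx");  // placeholder file name
  // If the model was exported with a dynamic input shape, pin it to a fixed
  // size before compiling (1x3x640x640 is an assumed input layout).
  model->reshape(ov::PartialShape{1, 3, 640, 640});
  // FP32/FP16/INT8 precision is determined by the model file itself;
  // compiling for CPU works the same way for all of them.
  ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
  ov::InferRequest request = compiled_model.create_infer_request();
  return 0;
}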

Dependencies

Dependency   Version
OpenVINO     >=2023.3
OpenCV       >=3.2.0
C++          >=14
CMake        >=3.10.2

Installation Options

You have two options for setting up the environment: manually installing dependencies or using Docker.

Manual Installation

Install Dependencies

apt-get update
apt-get install -y \
    libtbb2 \
    cmake \
    make \
    git \
    libyaml-cpp-dev \
    wget \
    libopencv-dev \
    pkg-config \
    g++ \
    gcc \
    libc6-dev \
    build-essential \
    sudo \
    ocl-icd-libopencl1 \
    python3 \
    python3-venv \
    python3-pip \
    libpython3.8
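
Optionally, you can confirm that the packaged OpenCV satisfies the version requirement listed above; the pkg-config module name is opencv4 on Ubuntu 20.04/22.04 and opencv on 18.04:

pkg-config --modversion opencv4   # use `opencv` instead on Ubuntu 18.04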

Install OpenVINO

You can download the OpenVINO 2023.3 archive from the official OpenVINO storage server and install it under /opt/intel:

wget -O openvino.tgz https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
sudo mkdir /opt/intel
sudo mv openvino.tgz /opt/intel/
cd /opt/intel
sudo tar -xvf openvino.tgz
sudo rm openvino.tgz
sudo mv l_openvino* openvino
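
After extracting the archive, the OpenVINO environment variables typically need to be set in each new shell before building or running the examples. With the install path used above, this is done by sourcing the bundled setup script:

source /opt/intel/openvino/setupvars.sh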

Using Docker

Building the Docker Image

To build the Docker image yourself, use the following command:

docker build . -t yolov10

Pulling the Docker Image

Alternatively, you can pull the pre-built Docker image from Docker Hub (available for Ubuntu 18.04, 20.04, and 22.04):

docker pull rlggyp/yolov10:18.04
docker pull rlggyp/yolov10:20.04
docker pull rlggyp/yolov10:22.04

For detailed usage information, please visit the Docker Hub repository page.

Running a Container

Grant the Docker container access to the X server by running the following command:

xhost +local:docker
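
When you are finished, this access can be revoked again with:

xhost -local:docker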

To run a container from the image, use the following docker run command:

docker run -it --rm --mount type=bind,src=$(pwd),dst=/repo \
    --env DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /dev:/dev \
    -w /repo \
    rlggyp/yolov10:<tag>
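
Replace <tag> with one of the available tags (18.04, 20.04, or 22.04). For example, to start the Ubuntu 22.04 image:

docker run -it --rm --mount type=bind,src=$(pwd),dst=/repo \
    --env DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /dev:/dev \
    -w /repo \
    rlggyp/yolov10:22.04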

Build

git clone https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference.git
cd YOLOv10-OpenVINO-CPP-Inference/src

mkdir build
cd build
cmake ..
make
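
Optionally, the final make step can be run in parallel to speed up compilation on multi-core machines:

make -j$(nproc)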

Usage

You can download YOLOv10 models in any of the following formats: ONNX, OpenVINO IR FP32, OpenVINO IR FP16, and OpenVINO IR INT8.

Using an ONNX Model Format

# For video input: 
./video <model_path.onnx> <video_path>
# For image input: 
./detect <model_path.onnx> <image_path>
# For real-time inference with a camera: 
./camera <model_path.onnx> <camera_index>

Using an OpenVINO IR Model Format

# For video input: 
./video <model_path.xml> <video_path>
# For image input: 
./detect <model_path.xml> <image_path>
# For real-time inference with a camera: 
./camera <model_path.xml> <camera_index>
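
For example, assuming a model and test media in the current directory (yolov10n.onnx, yolov10n.xml, bus.jpg, and traffic.mp4 are placeholder file names; for an IR model, the accompanying .bin weights file must sit next to the .xml file):

./detect yolov10n.onnx bus.jpg
./video yolov10n.xml traffic.mp4
./camera yolov10n.xml 0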

Example detection results: traffic (video), bus, and zidane.

Contributing

Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue or submit a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.