
Integrates MNN into OpenCV: use the OpenCV API to run ONNX models through ONNXRuntime.

Primary language: C++ · License: Apache-2.0

OpenCV-lite

OpenCV-lite is a lightweight version of OpenCV, tailored for DNN model deployment scenarios. The main modifications include:

  • Removing some less commonly used modules: features2d, flann, gapi, ml, objdetect, stitching, and video.
  • Retaining the dnn module API but directly utilizing MNN, ONNXRuntime, TFLite, and TensorRT for corresponding model inference.

  1. The OpenCV API is easy to use, but its compatibility with ONNX models is poor.
  2. ONNXRuntime is very compatible with ONNX, but its API is hard to use and changes frequently.

That poor ONNX compatibility is a headache; users frequently encounter errors such as:

[ERROR:0@0.357] global onnx_importer.cpp:xxxx cv::dnn::xxxx::ONNXImporter::handleNode ...

OpenCV DNN does not fully support dynamic shape input and has low ONNX op coverage, so users may hit an error either in readNet() or in net.forward(). Things are expected to improve after the release of OpenCV 5.0.

If you have a model that needs to be inferred and deployed in a C++ environment and you encounter the errors above, this library may be worth a try.

In this project, I removed all of the dnn implementation code, kept only the dnn API, and connected it to the C++ API of ONNXRuntime.

The ONNX op test coverage:

Project        ONNX op coverage (%)
OpenCV DNN     30.22% **
OpenCV-ORT     91.69% *
ONNXRuntime    92.22%

**: Statistical method:

(all_test - all_denylist - parser_denylist) / all_test = (867 - 56 - 549) / 867 = 30.22%

*: the unsupported test cases can be found here.

TODO List

  1. Fix some bugs in imgproc.
  2. Add GitHub Actions CI.
  3. Add a video demo.
  4. Add ORT-CUDA support, compatible with the net.setPreferableBackend(DNN_BACKEND_CUDA) API.

How to install?

Step 1: Download the ONNXRuntime binary package and unzip it.

Choose the package for your platform from https://github.com/microsoft/onnxruntime/releases

I have tested with ONNXRuntime 1.14.1, and it works well.
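For example, on Linux x86_64 the download might look like this (the archive name follows the release naming pattern; adjust the platform and version to match your setup):

```shell
# Assumed release archive name; swap "linux-x64" for your platform (e.g. osx-arm64).
ORT_VER=1.14.1
ORT_PKG=onnxruntime-linux-x64-${ORT_VER}
curl -LO https://github.com/microsoft/onnxruntime/releases/download/v${ORT_VER}/${ORT_PKG}.tgz
tar -xzf ${ORT_PKG}.tgz -C /opt   # may need sudo depending on /opt permissions
```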

Step 2: Set the environment variable.

The ORT_SDK variable is used by the OpenCV build to locate ONNXRuntime.

export ORT_SDK=/opt/onnxruntime-osx-arm64-1.14.1 # adjust the ORT_SDK path for your platform

Step 3: Compile OpenCV-lite from source code.

The compilation process is the same as for the original OpenCV project. The only difference is that ORT_SDK must be passed so that CMake can find the ONNXRuntime library and header files correctly.

git clone https://github.com/zihaomu/opencv_ort.git
cd opencv_ort
mkdir build && cd build
cmake -D ORT_SDK=/opt/onnxruntime-osx-arm64-1.14.1 .. # adjust the ORT_SDK path
cmake --build . -j4

How to use it?

The code is exactly the same as with the original OpenCV DNN.

#include <algorithm>
#include <iostream>
#include <vector>
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace std;
using namespace cv;
using namespace cv::dnn;

int main()
{
    // Load the input image and apply standard ImageNet preprocessing:
    // resize to 224x224, scale to [0, 1], then normalize per channel.
    Mat image = imread("PATH_TO_image");
    Scalar meanValue(0.485, 0.456, 0.406);
    Scalar stdValue(0.229, 0.224, 0.225);

    cvtColor(image, image, COLOR_BGR2RGB);
    resize(image, image, Size(224, 224));
    image.convertTo(image, CV_32F, 1.0 / 255.0);
    subtract(image, meanValue, image);
    divide(image, stdValue, image);

    // Pack the normalized image into an NCHW blob.
    Mat blob = blobFromImage(image);

    Net net = readNetFromONNX("PATH_TO_MODEL/resnet50-v1-12.onnx");

    std::vector<Mat> out;
    net.setInput(blob);
    net.forward(out);

    // The index of the largest logit is the predicted class id.
    double minVal = 0, maxVal = 0;
    Point minLoc, maxLoc;
    minMaxLoc(out[0], &minVal, &maxVal, &minLoc, &maxLoc);
    cout << "class id = " << maxLoc.x << endl;
    return 0;
}