lite.ai.toolkit

Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models (an out-of-the-box C++ AI toolkit, supporting ONNXRuntime/NCNN/MNN/TNN), such as RobustVideoMatting🔥, YOLOX🔥, YOLOP🔥, etc. Latest release: https://github.com/DefTruth/lite.ai.toolkit/releases/tag/v0.1.0


English | Chinese Docs | MacOS | Linux | Windows


Lite.AI.ToolKit 🚀🚀🌟: A lite C++ toolkit of awesome AI models, which contains 70+ models now. It's a collection of personal interests, such as RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3, ArcFace, etc. It's not perfect yet 😞 ... for now, let's regard it as a large collection of application cases for inference engines. Lite.AI.ToolKit is based on ONNXRuntime C++ by default. I do have plans to reimplement it with NCNN, MNN and TNN, and some models are already supported. Currently, I mainly focus on ease of use. Developers who need higher performance can make new optimizations based on the C++ implementations and ONNX files provided by this repo~ Welcome to open a new PR~ 👍👋 if you want to add a new model to this repo.

Core Features 🚀🚀🌟

โค๏ธ Star ๐ŸŒŸ๐Ÿ‘†๐Ÿป this repo if it does any helps to you ~ ๐Ÿ™ƒ๐Ÿคช๐Ÿ€


1. Build Lite.AI.ToolKit 🚀🚀🌟

Build the shared lib of Lite.AI.ToolKit for MacOS from sources. Note that Lite.AI.ToolKit uses onnxruntime as the default backend, because onnxruntime supports most of ONNX's operators. Click ▶️ to see the docs on how to build Lite.AI.ToolKit 🚀🚀🌟 for Linux and Windows.

โš ๏ธ Linux and Windows.

Linux and Windows.

โš ๏ธ Lite.AI.ToolKit is not directly support Linux and Windows now. For Linux and Windows, you need to build or download(if have official builts) the shared libs of OpenCVใ€ONNXRuntime and any other Engines(like MNN, NCNN, TNN) firstly, then put the headers into the specific directories or just let these directories unchange(use the headers offer by this repo, the header file of the dependent library of this project is directly copied from the corresponding official library). However, the dynamic libraries under different operating systems need to be recompiled or downloaded. MacOS users can directly use the dynamic libraries of each dependent library provided by this project:

  • lite.ai.toolkit/opencv2
      cp -r your-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
  • lite.ai.toolkit/onnxruntime
      cp -r your-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
  • lite.ai.toolkit/MNN
      cp -r your-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
  • lite.ai.toolkit/ncnn
      cp -r your-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
  • lite.ai.toolkit/tnn
      cp -r your-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn

and put the libs into the lite.ai.toolkit/lib directory. Please refer to the build docs for third_party.

  • lite.ai.toolkit/lib

      cp your-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib
  • Windows: You can refer to issue#6

  • Linux: The docs and Docker image for Linux are coming soon ~ issue#2

  • Happy News !!! : 🚀 You can download the latest official ONNXRuntime libs built for Windows, Linux, MacOS and Arm !!! Both CPU and GPU versions are available. No more attention needs to be paid to building it from source; download the official built libs from v1.8.1. I currently use version 1.7.0 for Lite.AI.ToolKit, which you can download from v1.7.0, but version 1.8.1 should also work, I guess ~ 🙃🤪🍀. For OpenCV, try to build from source (Linux) or download the official build (Windows) from OpenCV 4.5.3. Then put the includes and libs into the specific directories of Lite.AI.ToolKit.

    git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
    cd lite.ai.toolkit && sh ./build.sh  # On MacOS, you can use the built OpenCV, ONNXRuntime, MNN, NCNN and TNN libs in this repo.
  • GPU Compatibility: See issue#10.

  • To link Lite.AI.ToolKit, you can follow the CMakeLists.txt listed below.

cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
        lite.ai.toolkit
        onnxruntime
        MNN  # need, if built lite.ai.toolkit with ENABLE_MNN=ON,  default OFF
        ncnn # need, if built lite.ai.toolkit with ENABLE_NCNN=ON, default OFF 
        TNN  # need, if built lite.ai.toolkit with ENABLE_TNN=ON,  default OFF 
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
More details of how to link the shared lib of Lite.AI.ToolKit:
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib 
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
  • Run the built examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
  • To link the lite.ai.toolkit shared lib, you need to make sure that OpenCV and onnxruntime are linked correctly, as in the CMakeLists.txt above.

A minimal example showing how to link the shared lib of Lite.AI.ToolKit correctly for your own project can be found at lite.ai.toolkit.demo.
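
If you just want a quick smoke test that the linkage works, a minimal main like the following should do. This is only a sketch: the model and image paths are placeholders you must point at real files.

#include "lite/lite.h"
#include <iostream>

int main()
{
  // placeholder paths for this sketch
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");
  yolov5->detect(img_bgr, detected_boxes);
  std::cout << "Detected Boxes Num: " << detected_boxes.size() << std::endl;
  delete yolov5;
  return 0;
}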

2. Model Zoo.

Lite.AI.ToolKit contains 70+ AI models with 300+ frozen pretrained .onnx/.mnn/.param&.bin(ncnn)/.tnnmodel&.tnnproto files now. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.AI.ToolKit.
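
For instance, the Type::Class pattern reads lite::cv::<type>::<Class>; a quick sketch, using model file names from the hub docs:

auto *detector   = new lite::cv::detection::YoloV5("yolov5s.onnx");                 // type: detection
auto *recognizer = new lite::cv::faceid::GlintArcFace("ms1mv3_arcface_r100.onnx");  // type: faceid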

Namespace and Lite.AI.ToolKit modules.

Namespace Details
lite::cv::detection Object Detection. One-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc. ✅
lite::cv::classification Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅
lite::cv::faceid Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️
lite::cv::face Face Analysis. detect, align, pose, attr, etc. ❇️
lite::cv::face::detect Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️
lite::cv::face::align Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️
lite::cv::face::pose Head Pose Estimation. FSANet, etc. ❇️
lite::cv::face::attr Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️
lite::cv::segmentation Object Segmentation. Such as FCN, DeepLabV3, etc. ⚠️
lite::cv::style Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. ⚠️
lite::cv::matting Image Matting. Object and human matting. ⚠️
lite::cv::colorization Colorization. Make gray images become RGB. ⚠️
lite::cv::resolution Super Resolution. ⚠️

Lite.AI.ToolKit's Classes and Pretrained Files.

Correspondence between the classes in Lite.AI.ToolKit and pretrained model files can be found at lite.ai.toolkit.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.

Class Pretrained ONNX Files Rename or Converted From (Repo) Size
lite::cv::detection::YoloV5 yolov5l.onnx yolov5 (🔥🔥💥↑) 188Mb
lite::cv::detection::YoloV5 yolov5m.onnx yolov5 (🔥🔥💥↑) 85Mb
lite::cv::detection::YoloV5 yolov5s.onnx yolov5 (🔥🔥💥↑) 29Mb
lite::cv::detection::YoloV5 yolov5x.onnx yolov5 (🔥🔥💥↑) 351Mb
lite::cv::detection::YoloX yolox_x.onnx YOLOX (🔥🔥!!↑) 378Mb
lite::cv::detection::YoloX yolox_l.onnx YOLOX (🔥🔥!!↑) 207Mb
lite::cv::detection::YoloX yolox_m.onnx YOLOX (🔥🔥!!↑) 97Mb
lite::cv::detection::YoloX yolox_s.onnx YOLOX (🔥🔥!!↑) 34Mb
lite::cv::detection::YoloX yolox_tiny.onnx YOLOX (🔥🔥!!↑) 19Mb
lite::cv::detection::YoloX yolox_nano.onnx YOLOX (🔥🔥!!↑) 3.5Mb

This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files, according to your application, through the same Lite.AI.ToolKit classes, such as YoloV5, YoloX, etc.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !
Class Size From Awesome File Type State Usage
YoloV5 28M yolov5 🔥🔥💥↑ detection ✅ demo
YoloV3 236M onnx-models 🔥🔥🔥↑ detection ✅ demo
TinyYoloV3 33M onnx-models 🔥🔥🔥↑ detection ✅ demo
YoloV4 176M YOLOv4... 🔥🔥🔥↑ detection ✅ demo
SSD 76M onnx-models 🔥🔥🔥↑ detection ✅ demo
SSDMobileNetV1 27M onnx-models 🔥🔥🔥↑ detection ✅ demo
YoloX 3.5M YOLOX 🔥🔥🔥↑ detection ✅ demo
TinyYoloV4VOC 22M yolov4-tiny... 🔥🔥↑ detection ✅ demo
TinyYoloV4COCO 22M yolov4-tiny... 🔥🔥↑ detection ✅ demo
YoloR 39M yolor 🔥🔥↑ detection ✅ demo
ScaledYoloV4 270M ScaledYOLOv4 🔥🔥🔥↑ detection ✅ demo
EfficientDet 15M ...EfficientDet... 🔥🔥🔥↑ detection ✅ demo
EfficientDetD7 220M ...EfficientDet... 🔥🔥🔥↑ detection ✅ demo
EfficientDetD8 322M ...EfficientDet... 🔥🔥🔥↑ detection ✅ demo
YOLOP 30M YOLOP 🔥🔥↑ detection ✅ demo
NanoDet 1.1M nanodet 🔥🔥🔥↑ detection ✅ demo
NanoDetEfficientNetLite 12M nanodet 🔥🔥🔥↑ detection ✅ demo
NanoDetDepreciated 1.1M nanodet 🔥🔥🔥↑ detection ✅ demo
NanoDetEfficientNetLiteD... 12M nanodet 🔥🔥🔥↑ detection ✅ demo
  • Face Recognition.
Class Size From Awesome File Type State Usage
GlintArcFace 92M insightface 🔥🔥🔥↑ faceid ✅ demo
GlintCosFace 92M insightface 🔥🔥🔥↑ faceid ✅ demo
GlintPartialFC 170M insightface 🔥🔥🔥↑ faceid ✅ demo
FaceNet 89M facenet... 🔥🔥🔥↑ faceid ✅ demo
FocalArcFace 166M face.evoLVe... 🔥🔥🔥↑ faceid ✅ demo
FocalAsiaArcFace 166M face.evoLVe... 🔥🔥🔥↑ faceid ✅ demo
TencentCurricularFace 249M TFace 🔥🔥↑ faceid ✅ demo
TencentCifpFace 130M TFace 🔥🔥↑ faceid ✅ demo
CenterLossFace 280M center-loss... 🔥🔥↑ faceid ✅ demo
SphereFace 80M sphere... 🔥🔥↑ faceid ✅ demo
PoseRobustFace 92M DREAM 🔥🔥↑ faceid ✅ demo
NaivePoseRobustFace 43M DREAM 🔥🔥↑ faceid ✅ demo
MobileFaceNet 3.8M MobileFace... 🔥🔥↑ faceid ✅ demo
CavaGhostArcFace 15M cavaface... 🔥🔥↑ faceid ✅ demo
CavaCombinedFace 250M cavaface... 🔥🔥↑ faceid ✅ demo
MobileSEFocalFace 4.5M face_recog... 🔥🔥↑ faceid ✅ demo
  • Matting.
Class Size From Awesome File Type State Usage
RobustVideoMatting 14M RobustVideoMatting 🔥🔥🔥↑ matting ✅ demo
โš ๏ธ Expand More Details for Lite.AI.ToolKit's Model Zoo.
  • Face Detection.
Class Size From Awesome File Type State Usage
UltraFace 1.1M Ultra-Light... 🔥🔥🔥↑ face::detect ✅ demo
RetinaFace 1.6M ...Retinaface 🔥🔥🔥↑ face::detect ✅ demo
FaceBoxes 3.8M FaceBoxes 🔥🔥↑ face::detect ✅ demo
  • Face Alignment.
Class Size From Awesome File Type State Usage
PFLD 1.0M pfld_106_... 🔥🔥↑ face::align ✅ demo
PFLD98 4.8M PFLD... 🔥🔥↑ face::align ✅ demo
MobileNetV268 9.4M ...landmark 🔥🔥↑ face::align ✅ demo
MobileNetV2SE68 11M ...landmark 🔥🔥↑ face::align ✅ demo
PFLD68 2.8M ...landmark 🔥🔥↑ face::align ✅ demo
FaceLandmark1000 2.0M FaceLandm... 🔥↑ face::align ✅ demo
  • Head Pose Estimation.
Class Size From Awesome File Type State Usage
FSANet 1.2M ...fsanet... 🔥↑ face::pose ✅ demo
  • Face Attributes.
Class Size From Awesome File Type State Usage
AgeGoogleNet 23M onnx-models 🔥🔥🔥↑ face::attr ✅ demo
GenderGoogleNet 23M onnx-models 🔥🔥🔥↑ face::attr ✅ demo
EmotionFerPlus 33M onnx-models 🔥🔥🔥↑ face::attr ✅ demo
VGG16Age 514M onnx-models 🔥🔥🔥↑ face::attr ✅ demo
VGG16Gender 512M onnx-models 🔥🔥🔥↑ face::attr ✅ demo
SSRNet 190K SSR_Net... 🔥↑ face::attr ✅ demo
EfficientEmotion7 15M face-emo... 🔥↑ face::attr ✅ demo
EfficientEmotion8 15M face-emo... 🔥↑ face::attr ✅ demo
MobileEmotion7 13M face-emo... 🔥↑ face::attr ✅ demo
ReXNetEmotion7 30M face-emo... 🔥↑ face::attr ✅ demo
  • Classification.
Class Size From Awesome File Type State Usage
EfficientNetLite4 49M onnx-models 🔥🔥🔥↑ classification ✅ demo
ShuffleNetV2 8.7M onnx-models 🔥🔥🔥↑ classification ✅ demo
DenseNet121 30.7M torchvision 🔥🔥🔥↑ classification ✅ demo
GhostNet 20M torchvision 🔥🔥🔥↑ classification ✅ demo
HdrDNet 13M torchvision 🔥🔥🔥↑ classification ✅ demo
IBNNet 97M torchvision 🔥🔥🔥↑ classification ✅ demo
MobileNetV2 13M torchvision 🔥🔥🔥↑ classification ✅ demo
ResNet 44M torchvision 🔥🔥🔥↑ classification ✅ demo
ResNeXt 95M torchvision 🔥🔥🔥↑ classification ✅ demo
  • Segmentation.
Class Size From Awesome File Type State Usage
DeepLabV3ResNet101 232M torchvision 🔥🔥🔥↑ segmentation ✅ demo
FCNResNet101 207M torchvision 🔥🔥🔥↑ segmentation ✅ demo
  • Style Transfer.
Class Size From Awesome File Type State Usage
FastStyleTransfer 6.4M onnx-models 🔥🔥🔥↑ style ✅ demo
  • Colorization.
Class Size From Awesome File Type State Usage
Colorizer 123M colorization 🔥🔥🔥↑ colorization ✅ demo
  • Super Resolution.
Class Size From Awesome File Type State Usage
SubPixelCNN 234K ...PIXEL... 🔥↑ resolution ✅ demo

3. Examples for Lite.AI.ToolKit.

More examples can be found at lite.ai.toolkit.examples. Clicking ▶️ will show you more examples for the specific topics you are interested in.

Example0: Object Detection using YoloV5. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥🔥 YOLO-series detectors, YOLOX or YoloR. They produce similar results.
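
For example, switching detectors only changes the class and the model file; full YoloX code is shown in section 3.1, and the YoloR file name below is just a placeholder:

auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");
auto *yolor = new lite::cv::detection::YoloR("yolor-p6.onnx");  // placeholder file name
std::vector<lite::types::Boxf> detected_boxes;
yolox->detect(img_bgr, detected_boxes);  // same detect() call as YoloV5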


Example1: Video Matting using RobustVideoMatting2021 🔥🔥🔥. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents, false, 0.4f);
  
  delete rvm;
}

The output is:



Example2: 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:


Example3: Colorization using colorization. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



Example4: Face Recognition using ArcFace. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
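
For reference, the cosine similarity used here is just the normalized dot product of the two embeddings; a minimal stand-alone sketch (not the toolkit's internal code):

#include <cmath>
#include <numeric>
#include <vector>

static float cosine_similarity(const std::vector<float> &a, const std::vector<float> &b)
{
  // dot(a, b) / (||a|| * ||b||): near 1 means same identity, near 0 or negative means different.
  float dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.f);
  float na = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.f));
  float nb = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.f));
  return dot / (na * nb + 1e-12f);
}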


Example5: Face Detection using UltraFace. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

โš ๏ธ Expand All Examples for Each Topic in Lite.AI.ToolKit

3.1 Object Detection using YoloV5. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
  
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  delete yolov5;
}

The output is:

Or you can use the newest 🔥🔥 YOLO-series detector YOLOX. It gives similar results.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";

  auto *yolox = new lite::cv::detection::YoloX(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolox->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolox;
}

The output is:

More classes for general object detection (see the usage sketch after this list).

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!
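
Whichever detector you pick from the list above, the usage stays the same, assuming each class exposes the detect(const cv::Mat&, std::vector<types::Boxf>&) interface shown in the examples and API docs; a sketch:

std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread("test.jpg");  // placeholder image path
detector->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);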

3.2 Face Recognition using ArcFace. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267

More classes for face recognition (see the verification sketch after this list).

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
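
To turn the similarity scores from any of these recognizers into a same/different decision, threshold them; a sketch reusing face_content0/face_content1 from the example above. The 0.5 cutoff is an illustrative assumption, not a value from this repo:

float sim = lite::utils::math::cosine_similarity<float>(
    face_content0.embedding, face_content1.embedding);
const float kMatchThreshold = 0.5f;  // hypothetical threshold, tune per model and dataset
bool same_person = (sim > kMatchThreshold);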

3.3 Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More classes for segmentation.

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);

3.4 Age Estimation using SSRNet. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  lite::cv::face::attr::SSRNet *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More classes for face attributes analysis (see the sketch after this list).

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
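
Usage mirrors the SSRNet example above. A sketch for EfficientEmotion7, assuming it exposes detect() with a lite::types::Emotions result analogous to types::Age, and with a placeholder model path:

auto *emotion = new lite::cv::face::attr::EfficientEmotion7("efficient_emotion7.onnx");  // placeholder path
lite::types::Emotions emotions;  // assumed result type
cv::Mat img_bgr = cv::imread("test.jpg");  // placeholder path
emotion->detect(img_bgr, emotions);
delete emotion;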

3.5 1000 Classes Classification using DenseNet. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More classes for image classification.

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);

3.6 Face Detection using UltraFace. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

More classes for face detection (see the pipeline sketch after this list).

auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);   // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);   // 3.8Mb only !
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
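
The face detectors compose naturally with the alignment models of section 3.9. A sketch that crops each detected face and runs FaceLandmark1000 on the crop, assuming types::Boxf carries x1/y1/x2/y2 corner coordinates (paths are placeholders):

auto *ultraface = new lite::cv::face::detect::UltraFace("ultraface-rfb-640.onnx");
auto *landmarker = new lite::cv::face::align::FaceLandmark1000("FaceLandmark1000.onnx");

std::vector<lite::types::Boxf> faces;
cv::Mat img_bgr = cv::imread("test.jpg");  // placeholder path
ultraface->detect(img_bgr, faces);
for (const auto &box : faces)
{
  // clamp the box to the image before cropping
  cv::Rect roi((int) box.x1, (int) box.y1,
               (int) (box.x2 - box.x1), (int) (box.y2 - box.y1));
  roi &= cv::Rect(0, 0, img_bgr.cols, img_bgr.rows);
  if (roi.area() <= 0) continue;
  cv::Mat face_bgr = img_bgr(roi).clone();
  lite::types::Landmarks landmarks;
  landmarker->detect(face_bgr, landmarks);
}
delete ultraface;
delete landmarker;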

3.7 Colorization using colorization. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



3.8 Head Pose Estimation using FSANet. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:


3.9 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More classes for face alignment.

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks !

3.10 Style Transfer using FastStyleTransfer. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:



3.11 Video Matting using RobustVideoMatting. Download model from Model-Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents);
  
  delete rvm;
}

The output is:


4. Lite.AI.ToolKit API Docs.

4.1 Default Version APIs.

More details of the Default Version APIs can be found at api.default.md. For example, the interface for YoloV5 is:

lite::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes, 
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
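
For instance, to trade recall for precision you might raise the score threshold and cap the number of kept boxes; a sketch against the defaults above (paths are placeholders):

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // placeholder path
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread("test.jpg");  // placeholder path
// stricter score threshold (0.50 vs the 0.25 default), same IoU threshold, keep at most 50 boxes
yolov5->detect(img_bgr, detected_boxes, 0.50f, 0.45f, 50);
delete yolov5;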

4.2 ONNXRuntime Version APIs.

More details of the ONNXRuntime Version APIs can be found at api.onnxruntime.md. For example, the interface for YoloV5 is:

lite::onnxruntime::cv::detection::YoloV5

void detect(const cv::Mat &mat, std::vector<types::Boxf> &detected_boxes, 
            float score_threshold = 0.25f, float iou_threshold = 0.45f,
            unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);

4.3 MNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::mnn::cv::detection::YoloV5

lite::mnn::cv::detection::YoloV4

lite::mnn::cv::detection::YoloV3

lite::mnn::cv::detection::SSD

...

4.4 NCNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::ncnn::cv::detection::YoloV5

lite::ncnn::cv::detection::YoloV4

lite::ncnn::cv::detection::YoloV3

lite::ncnn::cv::detection::SSD

...

4.5 TNN Version APIs.

(todo ⚠️: not implemented yet, coming soon.)

lite::tnn::cv::detection::YoloV5

lite::tnn::cv::detection::YoloV4

lite::tnn::cv::detection::YoloV3

lite::tnn::cv::detection::SSD

...

5. Other Docs.


5.1 Docs for ONNXRuntime.

5.2 Docs for third_party.

Other build documents for different engines and different targets will be added later.

Library Target Docs
OpenCV mac-x86_64 opencv-mac-x86_64-build.zh.md
OpenCV android-arm opencv-static-android-arm-build.zh.md
onnxruntime mac-x86_64 onnxruntime-mac-x86_64-build.zh.md
onnxruntime android-arm onnxruntime-android-arm-build.zh.md
NCNN mac-x86_64 todo ⚠️
MNN mac-x86_64 todo ⚠️
TNN mac-x86_64 todo ⚠️

6. License.

The code of Lite.AI.ToolKit is released under the GPL-3.0 License.

7. References.

Many thanks to the following projects. All of Lite.AI.ToolKit's models are sourced from these repos.


8. Citations.

Cite it as follows if you use Lite.AI.ToolKit.

@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}

9. Notification.

If there is a model you are interested in and want supported by Lite.AI.ToolKit 🚀🚀🌟, you can fork this repo, modify TODOLIST.md, and then submit a PR~ I will review the PR and try to support the model in the future, but I can't guarantee it. In addition, MNN, NCNN and TNN support for some models will be added in the future; however, due to operator compatibility and some other reasons, it is impossible to ensure that all models supported by ONNXRuntime C++ can run through MNN, NCNN and TNN. So, if you want to use all the models supported by this repo and don't care about a performance gap of 1~2ms, please use the ONNXRuntime version. ONNXRuntime is the default inference engine for this repo. However, you can follow the steps below if you want to build Lite.AI.ToolKit 🚀🚀🌟 with MNN, NCNN or TNN support (⚠️ NOT STABLE NOW! NOT RECOMMENDED!!! 🤦)

  • change build.sh to pass -DENABLE_MNN=ON, -DENABLE_NCNN=ON or -DENABLE_TNN=ON, such as
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \
  -DENABLE_MNN=ON \
  -DENABLE_NCNN=OFF \
  -DENABLE_TNN=OFF \
  .. && make -j8
# Note: trailing comments after "\" would break the line continuation, so the flags are documented here instead.
# INCLUDE_OPENCV: whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to set up OpenCV yourself.
# ENABLE_MNN / ENABLE_NCNN / ENABLE_TNN: whether to build with MNN / NCNN / TNN, default OFF; only some models are supported now.
  • use the MNN, NCNN or TNN version interface, see demo, such as the snippet below (a fuller usage sketch follows):
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
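
The engine-specific classes are meant to mirror the default interfaces; a sketch with the MNN NanoDet, assuming it shares the detect(const cv::Mat&, std::vector<types::Boxf>&) call of the default version (mnn_path and the image path are placeholders):

auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread("test.jpg");  // placeholder path
nanodet->detect(img_bgr, detected_boxes);
delete nanodet;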

10. Related projects.