model-optimizer
There are 19 repositories under the model-optimizer topic.
Adlik/Adlik
Adlik: Toolkit for Accelerating Deep Learning Inference
Adlik/model_optimizer_tf
Model optimizer used in Adlik.
dkurt/openvino_pytorch_layers
How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO
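As a point of reference for the repository above, here is a minimal sketch of the ordinary export path (no unsupported layers, which are the repo's actual focus); the model, file names, and input shape are placeholders of my own, not taken from the repository.

```python
# Hypothetical sketch: export a PyTorch model to ONNX, then load it with OpenVINO.
import torch
import torchvision
from openvino.runtime import Core

model = torchvision.models.resnet18(weights=None)  # placeholder network
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# OpenVINO can read ONNX directly; converting to IR (.xml/.bin) with the
# Model Optimizer is only needed when an offline IR is desired.
core = Core()
ov_model = core.read_model("model.onnx")
compiled = core.compile_model(ov_model, "CPU")
```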
bethusaisampath/YOLOs_OpenVINO
Inference with the latest YOLO models using the Intel OpenVINO toolkit
yas-sim/openvino-model-division-and-simple-custom-layer
Demonstrates how to divide a DL model into multiple IR model files (model division) and introduces a simple way to implement a custom layer that works with OpenVINO IR models.
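To illustrate the division idea, a minimal sketch of running a model split into two IR files, where the first half's output tensor is fed to the second half; the file names are assumptions, not the repository's.

```python
# Sketch (assumed filenames) of chaining two IR halves of a divided model.
import numpy as np
from openvino.runtime import Core

core = Core()
head = core.compile_model(core.read_model("model_part1.xml"), "CPU")
tail = core.compile_model(core.read_model("model_part2.xml"), "CPU")

x = np.zeros(list(head.input(0).shape), dtype=np.float32)  # dummy input
intermediate = head([x])[head.output(0)]                    # run first half
result = tail([intermediate])[tail.output(0)]               # feed into second half
```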
bethusaisampath/YOLOv5_Openvino
YOLOv5 inference using the Intel OpenVINO toolkit
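A rough sketch of what such an inference loop looks like with the OpenVINO runtime; the IR file name, the 640x640 input size, and the random input are my assumptions, and real code would preprocess an image and decode the detections.

```python
# Minimal sketch (not the repository's code): run a YOLOv5 IR model with OpenVINO.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5s.xml")           # IR produced by the Model Optimizer
compiled = core.compile_model(model, "CPU")

# Dummy NCHW input; a real pipeline would letterbox and normalize an image here.
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)
predictions = compiled([frame])[compiled.output(0)]  # raw YOLOv5 output tensor
print(predictions.shape)
```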
MrEliptik/Keras_to_TF_NCS2
Keras to TensorFlow conversion test for the Neural Compute Stick 2
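A small sketch of the first step such a test implies: saving a Keras model in a TensorFlow format that the Model Optimizer can later convert for the Neural Compute Stick 2. The network, path, and the example mo invocation in the comment are assumptions, not taken from the repository.

```python
# Hypothetical sketch: export a Keras model to a TensorFlow SavedModel directory.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder network
model.save("saved_model")  # writes a TensorFlow SavedModel directory

# The IR for the NCS2 would then be produced offline, e.g. with something like:
#   mo --saved_model_dir saved_model --data_type FP16
```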
vuiseng9/openvino-ubuntu
Set up and run OpenVINO in a Docker Ubuntu environment on an Intel CPU with integrated graphics
likholat/openvino_quantization
This sample shows how to convert a TensorFlow model to an OpenVINO IR model and how to quantize the OpenVINO model.
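For orientation only, a sketch of that convert-then-quantize flow; it uses the in-Python convert_model API and NNCF post-training quantization as stand-ins for whatever tooling the repository actually uses, with placeholder paths and random calibration data.

```python
# Illustrative sketch only; file names, calibration data, and the choice of NNCF
# post-training quantization are assumptions, not the repository's exact workflow.
import numpy as np
import nncf
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# 1. Convert a TensorFlow SavedModel to an OpenVINO model in memory and save FP32 IR.
ov_model = convert_model("saved_model_dir")
serialize(ov_model, "model_fp32.xml", "model_fp32.bin")

# 2. Post-training INT8 quantization with a small calibration set.
calib_data = [np.random.rand(1, 224, 224, 3).astype(np.float32) for _ in range(10)]
calibration_dataset = nncf.Dataset(calib_data)
quantized_model = nncf.quantize(ov_model, calibration_dataset)
serialize(quantized_model, "model_int8.xml", "model_int8.bin")
```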
Otien0/Intel-Edge-AI-Foundation-course-
Covers the basics of AI at the edge: leveraging pre-trained models available with the Intel® Distribution of OpenVINO™ Toolkit, converting and optimizing other models with the Model Optimizer, and performing inference with the Inference Engine.
tuan-l/rpi-openvino-docker
Dockerfile to build Intel® Distribution of OpenVINO™ Toolkit docker image for Raspberry Pi
Develop-Packt/Using-OpenVINO-and-OpenCV
Explore the OpenVINO toolkit, focusing on components such as the model zoo, the Inference Engine, and the Model Optimizer, and how they can be used for deep learning and computer vision tasks.
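A small sketch (assumed, not from the course materials) tying those components together: the Model Optimizer produces IR files offline, and the runtime (Inference Engine) loads and executes them on an available device. The model file name is a placeholder.

```python
# Minimal OpenVINO runtime flow: enumerate devices, load an IR, run one inference.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)                  # e.g. ['CPU', 'GPU']

model = core.read_model("face-detection.xml")  # IR from the model zoo (placeholder name)
compiled = core.compile_model(model, "CPU")

dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
detections = compiled([dummy])[compiled.output(0)]
```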
docongminh/face-recognition-serving
Serving face detection and recognition based on ArcFace
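A hypothetical sketch of the recognition step only: comparing ArcFace-style embeddings by cosine similarity. The threshold, embedding size, and random vectors are assumptions; the repository's actual serving stack is not shown here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Normalize both embeddings, then take the dot product.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def is_same_person(emb1: np.ndarray, emb2: np.ndarray, threshold: float = 0.5) -> bool:
    # Embeddings of the same identity typically have high cosine similarity.
    return cosine_similarity(emb1, emb2) >= threshold

gallery = np.random.rand(512)  # stand-in for a stored face embedding
probe = np.random.rand(512)    # stand-in for a freshly computed embedding
print(is_same_person(gallery, probe))
```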
mrb987/autooptimizer
autooptimizer is a Python package for optimizing machine learning algorithms.
underflow101/ai-zipper
ai-zipper offers numerous AI model compression methods and is easy to embed into your own source code.
Valderas7/TFM-MovidiusNCS
Master's thesis (Trabajo Fin de Máster): a comparative study of an image classifier on the Raspberry Pi, comparing inference time on the Raspberry Pi with and without the Neural Compute Stick (NCS). It also studies how the complexity of a neural network affects inference time, and analyses whether the times obtained with the NCS on the Raspberry Pi match those achieved by a laptop CPU and by a Google Colab GPU.
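A sketch of the kind of timing comparison described above (assumed, not the thesis code): measuring average inference latency for the same IR model on the CPU and on the NCS, which OpenVINO exposes as the "MYRIAD" device in the releases that support it. The model name and run count are placeholders.

```python
# Rough latency comparison between CPU and the Neural Compute Stick ("MYRIAD").
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("classifier.xml")  # placeholder model name

def mean_latency(device: str, runs: int = 50) -> float:
    compiled = core.compile_model(model, device)
    dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
    start = time.perf_counter()
    for _ in range(runs):
        compiled([dummy])
    return (time.perf_counter() - start) / runs

for device in ("CPU", "MYRIAD"):
    if device not in core.available_devices:
        continue  # skip the NCS target when no stick is plugged in
    print(device, f"{mean_latency(device) * 1000:.1f} ms")
```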
KnightWhoSayNi/openvino-tf-to-ir
Dockerfile for converting a frozen TensorFlow model to OpenVINO™ Intermediate Representation (IR) using the Model Optimizer (MO)
yvgupta03/AI_MobileNet_Image-Classification
AI-based image classification inspired by the MobileNet V2 architecture, implementing changes to the base architecture, with details on using it as a quick-response model (a proposition) for rapid application, and a comparison with other models for the same application.
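Illustrative only, since the repository's specific architectural changes are not shown here: a MobileNetV2 backbone with a replaced classification head, which is the usual starting point for this kind of modification. The input size and class count are assumptions.

```python
# Hypothetical sketch: MobileNetV2 backbone with a custom classification head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False,
                                          input_shape=(224, 224, 3),
                                          weights=None,
                                          pooling="avg")
outputs = tf.keras.layers.Dense(10, activation="softmax")(base.output)  # 10 classes assumed
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```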