# Optimized Inference at the Edge with Intel® Tools and Technologies

This workshop walks you through a computer vision workflow using the latest Intel® technologies and comprehensive toolkits, including support for deep learning algorithms that help accelerate smart video applications. You will learn how to optimize and improve performance with and without external accelerators, and how to use tools that help you identify the best hardware configuration for your needs. The workshop also outlines the various frameworks and topologies supported by Intel® accelerator tools.
⚠️ For the in-class training, the hardware and software setup has already been completed on the workshop hardware. In-class participants should proceed directly to the Workshop Agenda section.
To use this workshop content, you will need to set up your hardware and install the OpenVINO™ toolkit for running inference on your computer vision applications.

The hardware requirements are listed in the System Requirements section of the install guide.
These labs have been validated on Ubuntu 16.04 OS.
Use the steps described in the install guide to install the OpenVINO™ toolkit, as well as the Media SDK and OpenCL* components mentioned in the Post-Installation section of the guide.
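After installation, the toolkit's environment variables must be set in every new shell before you build or run samples. The snippet below is a minimal sketch that assumes the default install path used throughout these steps (/opt/intel/computer_vision_sdk); adjust it if you installed to a different location.

```bash
# Set the OpenVINO™ toolkit environment variables for the current shell
# (assumes the default install location used in this workshop)
source /opt/intel/computer_vision_sdk/bin/setupvars.sh

# Optionally persist this so every new terminal is configured
echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ~/.bashrc
```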
Install gflags and the Python* prerequisites for the Model Optimizer's Caffe* support:

```bash
# gflags is used by the Inference Engine samples for command-line parsing
sudo apt install libgflags-dev
sudo apt install python3-pip

# Python dependencies for converting Caffe* models with the Model Optimizer
pip3 install -r /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/requirements_caffe.txt
```
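With these prerequisites in place, the Model Optimizer can convert a trained Caffe* model into the Intermediate Representation (IR) files used by the Inference Engine. The sketch below is illustrative only; the model path and output directory are placeholders rather than files provided by the workshop.

```bash
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer

# Convert a Caffe* model (placeholder path) into IR files (.xml + .bin)
python3 mo.py \
    --input_model /path/to/your_model.caffemodel \
    --data_type FP32 \
    --output_dir ~/openvino_models/ir
```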
Compile the built-in samples included in the OpenVINO™ toolkit:
```bash
cd /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/
sudo mkdir build && cd build
sudo cmake -DCMAKE_BUILD_TYPE=Debug ..
sudo make
```
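As a quick sanity check, you can run one of the compiled samples with its help flag. The output directory below (build/intel64/Debug) is an assumption based on the Debug build type above; check your build output and adjust the path if the binaries were placed elsewhere.

```bash
# Path to the built binaries is an assumption; adjust to match your build output
cd /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build/intel64/Debug

# Print the usage text of one of the built samples to confirm the build succeeded
./classification_sample -h
```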
- Install python3 (version 3.5.2 or newer)
- Install the yaml and requests modules with the following command:

  ```bash
  sudo -E pip3 install pyyaml requests
  ```
- Run the model downloader script to download example deep learning models:

  ```bash
  cd /opt/intel/computer_vision_sdk/deployment_tools/model_downloader
  sudo ./downloader.py
  ```
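Downloading every model can take a long time. The downloader also accepts options for listing the available models and fetching only a specific one; the option names and the model name below are assumptions based on common usage of the downloader script, so consult `./downloader.py -h` if they differ in your version.

```bash
cd /opt/intel/computer_vision_sdk/deployment_tools/model_downloader

# List the models the script knows how to download (option name assumed)
sudo ./downloader.py --print_all

# Download a single example model instead of everything (model name is an example)
sudo ./downloader.py --name squeezenet1.1
```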
## Workshop Agenda

- Intel Smart Video/Computer Vision Tools Overview
- Basic End to End Object Detection Example
- Hardware Heterogeneity
  - Lab - Hardware Heterogeneity
- HW Acceleration with Intel® Movidius™ Neural Compute Stick
- FPGA Inference Accelerator
- Optimization Tools and Techniques
- Advanced Video Analytics
  - Lab - Advanced Video Analytics