Optimized Inference at the Edge with Intel® Tools and Technologies

This workshop will walk you through a computer vision workflow using the latest Intel® technologies and comprehensive toolkits including support for deep learning algorithms that help accelerate smart video applications. You will learn how to optimize and improve performance with and without external accelerators and utilize tools to help you identify the best hardware configuration for your needs. This workshop will also outline the various frameworks and topologies supported by Intel® accelerator tools.

⚠️ This workshop content has been validated with Intel® Distribution of OpenVINO™ toolkit version 2020 R1 (openvino_toolkit_2020.1.023).

How to Get Started

⚠️ For the in-class training, the hardware and software setup has already been done on the workshop hardware. In-class participants should skip ahead to the Workshop Agenda section.

In order to use this workshop content, you will need to set up your hardware and install the Intel® Distribution of OpenVINO™ toolkit for running inference in your computer vision applications.

1. Hardware requirements

The hardware requirements are listed in the System Requirements section of the install guide.

2. Operating System

These labs have been validated on Ubuntu* 18.04 OS.

3. Software installation steps

a). Install Intel® Distribution of OpenVINO™ toolkit

Follow the steps described in the install guide to install the Intel® Distribution of OpenVINO™ toolkit, configure the Model Optimizer, and run the demos. The guide also covers the additional steps to install the Intel® Media SDK and OpenCL™ drivers.
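After installation, the toolkit environment has to be initialized in every new shell before building samples or running the labs. A minimal sketch, assuming the default install location of /opt/intel/openvino:

# Set up the OpenVINO environment variables (default install path assumed)
source /opt/intel/openvino/bin/setupvars.sh

# Optional: append the command to ~/.bashrc so new terminals are configured automatically
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc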

b). Install required packages

sudo apt install git
sudo apt install python3-pip
sudo apt install libgflags-dev
sudo pip3 install opencv-python
sudo pip3 install cogapp

c). Run the demo scripts and compile samples

Run the demo scripts (either one, or both if you want to try both demos). They generate the $HOME/inference_engine_samples_build and $HOME/inference_engine_demos_build folders built against the current Intel® Distribution of OpenVINO™ toolkit.

cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh
./demo_security_barrier_camera.sh

sudo chown -R $USER:$USER $HOME/inference_engine_samples_build
cd $HOME/inference_engine_samples_build
make

sudo chown -R $USER:$USER $HOME/inference_engine_demos_build
cd $HOME/inference_engine_demos_build
make
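If both builds complete successfully, the compiled binaries should land in an intel64/Release subfolder of each build directory; the quick check below assumes the default output locations of the demo scripts:

# List the compiled sample and demo binaries (default build output locations assumed)
ls $HOME/inference_engine_samples_build/intel64/Release
ls $HOME/inference_engine_demos_build/intel64/Release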

d). Download models using the Model Downloader scripts in the Intel® Distribution of OpenVINO™ toolkit install folder

  • Install python3 (version 3.5.2 or newer)
  • Install the yaml and requests modules with the following commands:
cd /opt/intel/openvino/deployment_tools/tools/model_downloader
python3 -mpip install --user -r ./requirements.in
  • Run the model downloader script to download the example deep learning models used in the labs (a sketch for converting one of the public models follows this list):
sudo python3 downloader.py --name mobilenet-ssd,ssd300,ssd512,squeezenet1.1,face-detection-retail-0004,age-gender-recognition-retail-0013,head-pose-estimation-adas-0001,emotions-recognition-retail-0003,facial-landmarks-35-adas-0002
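The Intel® pretrained models in the list above (face-detection-retail-0004 and the other models with version suffixes) are downloaded directly in IR format, but the public models (mobilenet-ssd, ssd300, ssd512, squeezenet1.1) still need to be converted with the Model Optimizer before the Inference Engine can load them. A minimal sketch for mobilenet-ssd, assuming the downloader placed it in a public/ subfolder of the model_downloader directory; adjust the paths to wherever your models actually landed:

# Convert the public Caffe* mobilenet-ssd model to OpenVINO IR (FP32); paths below are assumptions
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 mo.py \
    --input_model /opt/intel/openvino/deployment_tools/tools/model_downloader/public/mobilenet-ssd/mobilenet-ssd.caffemodel \
    --data_type FP32 \
    --output_dir $HOME/openvino_models/ir/public/mobilenet-ssd/FP32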

e). Install Intel® VTune™ Amplifier on the development machine

Follow the guide to install Intel® VTune™ Amplifier on your development machine.
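Once VTune Amplifier is installed, its environment also needs to be sourced in each shell before the profiling labs; a minimal sketch, assuming the default install path of /opt/intel/vtune_amplifier:

# Set up the VTune Amplifier environment (default install path assumed)
source /opt/intel/vtune_amplifier/amplxe-vars.sh

# Launch the GUI once to confirm the installation
amplxe-gui &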

f). Install Jupyter Notebook and OpenCV

Install Jupyter Notebook using the command below:

pip3 install jupyter

Install OpenCV (the cv2 Python module) using the command below:

pip3 install opencv-python
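A quick sanity check after both installs is to print the installed versions:

# Confirm Jupyter and the OpenCV Python bindings are available
python3 -c "import cv2; print(cv2.__version__)"
jupyter --version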

g). Run the Jupyter Notebook

  1. Run the Jupyter Notebook
	$ jupyter notebook
  2. The notebook dashboard opens in your default browser; locate the required Jupyter notebook (.ipynb) file and double-click it to open and run it. A quick check of the notebook environment is sketched below.
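Before opening the workshop notebooks, it can be useful to confirm that the Inference Engine Python API is visible from the same shell, i.e. that setupvars.sh has been sourced as in step a). A minimal check:

# List the inference devices visible to the Inference Engine Python API
python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"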

Workshop Agenda

Disclaimer

Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.