
Intel Edge AI Foundation Course

This course covers the basics of AI at the Edge: leveraging pre-trained models available with the Intel® Distribution of OpenVINO™ Toolkit, converting and optimizing other models with the Model Optimizer, and performing inference with the Inference Engine.

The core curriculum of the course is divided into three sections:

1. Welcome To The Intel Edge AI Foundation

2. Intel Edge AI Foundation Course

3. Certificate of Completion

Course Structure:

1. This course largely focuses on AI at the Edge using the Intel® Distribution of OpenVINO™ Toolkit.

2. Leveraging Pre-Trained Models:

First, we start off with the pre-trained models available in the OpenVINO™ Open Model Zoo. Even without huge amounts of your own data or costly training, you can deploy powerful models already created for many applications.
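For example, models from the Open Model Zoo can be fetched with the toolkit's Model Downloader. A minimal sketch, assuming a default Linux install path and a person-detection model as the target (both are placeholders to adjust for your setup):

```python
import subprocess

# Open Model Zoo downloader bundled with OpenVINO
# (assumed install location; it varies by release and OS).
DOWNLOADER = "/opt/intel/openvino/deployment_tools/tools/model_downloader/downloader.py"

# Fetch a pre-trained person-detection model into ./models
subprocess.run([
    "python3", DOWNLOADER,
    "--name", "person-detection-retail-0013",  # example Open Model Zoo model
    "--precisions", "FP16",
    "-o", "models",
], check=True)
```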

3. The Model Optimizer:

Next is the Model Optimizer, which can take a model you trained in a framework such as TensorFlow, Caffe, or PyTorch (the latter via ONNX export) and create an Intermediate Representation (IR) optimized for inference with OpenVINO™ on Intel® hardware.
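As a rough sketch, converting a frozen TensorFlow graph to IR might look like the following; the mo.py path and the input file name are assumptions for illustration:

```python
import subprocess

# Model Optimizer entry point bundled with OpenVINO
# (assumed install location; it varies by release).
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

# Produce an IR (.xml topology + .bin weights) from a frozen TensorFlow graph
subprocess.run([
    "python3", MO,
    "--input_model", "frozen_inference_graph.pb",  # hypothetical input model
    "--reverse_input_channels",                    # swap RGB -> BGR, common for TF models
    "--data_type", "FP16",
    "--output_dir", "ir_models",
], check=True)
```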

4. The Inference Engine:

Third is the Inference Engine, which performs the actual inference on the IR model.
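A minimal inference sketch, assuming the pre-2022 Python API (openvino.inference_engine, 2020.x-era); the IR file names and input shape are placeholders:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load the IR produced by the Model Optimizer (placeholder file names);
# older releases construct an IENetwork directly instead of read_network.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# First input/output blob names (older releases expose net.inputs instead)
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Dummy NCHW frame; real code would resize/transpose an actual image to
# the shape the model expects (assumed 1x3x300x300 here).
frame = np.zeros((1, 3, 300, 300), dtype=np.float32)
result = exec_net.infer({input_blob: frame})[output_blob]
print(result.shape)
```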

5. Deploying An Edge App:

Lastly, the course covers further topics on deploying at the edge, including handling input streams, processing model outputs, and the lightweight MQTT architecture used to publish data from your edge models to the web.
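To illustrate the MQTT piece, a minimal sketch using the paho-mqtt client; the broker address, topic name, and payload fields are all assumptions:

```python
import json
import paho.mqtt.client as mqtt

# Connect to a local MQTT broker (assumed host and default port)
client = mqtt.Client()
client.connect("localhost", 1883, keepalive=60)

# Publish inference statistics on a topic a web server could subscribe to
# (topic name and fields are hypothetical)
stats = {"count": 3, "total": 42}
client.publish("person", json.dumps(stats))
client.disconnect()
```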

What I Built Throughout The Course:

In the project at the end of the course, I built and deployed a People Counter App at the Edge. Over the course of the project, I was able to:

1. Convert a model to an Intermediate Representation (IR).

2. Use the IR with the Inference Engine.

3. Process the output of the model to gather relevant statistics (a sketch follows this list).

4. Send those statistics to a server.

5. Perform analysis on both the performance and further use cases of the model.
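As an illustrative sketch of the statistics-gathering step, here is one way to count people in a frame from an SSD-style detection output; the [1, 1, N, 7] output layout and the confidence threshold are assumptions tied to the particular model used:

```python
import numpy as np

def count_people(detections: np.ndarray, threshold: float = 0.5) -> int:
    """Count detections above a confidence threshold.

    Assumes an SSD-style output of shape [1, 1, N, 7], where each row is
    [image_id, label, confidence, x_min, y_min, x_max, y_max].
    """
    return sum(1 for det in detections[0][0] if det[2] >= threshold)

# Example with two fake detections, one above and one below the threshold
fake = np.array([[[[0, 1, 0.9, 0.1, 0.1, 0.4, 0.8],
                   [0, 1, 0.3, 0.5, 0.2, 0.7, 0.9]]]], dtype=np.float32)
print(count_people(fake))  # -> 1
```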