
Computer Vision on Raspberry Pi

by Danh Doan

Introduction

This is a series about developing common Computer Vision projects on the Raspberry Pi board. Some of them require the support of a Movidius Neural Compute Stick to boost performance. The OpenVINO toolkit is the main development tool; it helps optimize pretrained models so they run well on the Raspberry Pi.

The main purpose of this work is to help developers of all levels gain insights and resources for working with the Raspberry Pi on Computer Vision projects. Every sample project is refactored and organized so that it is easy to understand and approach.

If you have any project ideas or run into issues with these projects, feel free to comment. It will help improve and enrich the content. Sincere thanks in advance.

Other Computer Vision demos: [link]

Updates

2021, Jan 04:

  • Publish Docker image for OpenCV [link]

2019, Nov 27:

  • Add 012-tflite-object-detection

2019, Nov 15:

2019, Nov 05:

  • Add 008-pi-emotion-recognition
  • Update installation guide to support IECore

2019, Oct 23:

  • Add 004-pi-head-pose-estimation

2019, Oct 16:

  • Add 006-pi-face-verification

2019, Oct 15:

  • Add 003-pi-face-alignment

2019, Oct 12:

  • Add 005-pi-object-detection
  • Add 002-pi-facial-landmark-detection
  • Add 001-pi-face-detection
  • Add 000-show-pi-camera

Sample Projects

  • Test with Pi camera module: [link]

    • Play around with the built-in Pi camera module
  • Face Detection with High Accuracy: [link]

    • Develop an accurate and robust face detector with a pretrained SSD model trained on the WIDER dataset
  • Facial Keypoint Detection: [link] [demo] [demo]

    • Develop a simple facial keypoint localizer that detects the 5 main keypoints of a human face (eye centers, nose tip, and mouth corners)
  • Face Alignment: [link]

    • Based on the 5 keypoints, align human faces to support other problems, e.g. Face Identification, Face Verification, ...
  • Headpose Estimation: [link] [demo]

    • Estimate human head pose in Tait-Bryan angles (yaw, pitch, and roll)
  • Human Detection: [link] [demo]

    • Develop an object detector, specifically with an SSD model

    Note: humans are just one example of an object class; any object detection model can be converted to work with this sample project

  • Face Verification: [link]

    Verify face identity by comparing face embeddings (see the sketch after this list)

  • Emotion Recognition: [link] [demo]

    Recognize emotional states from human faces

  • Car and License Plate Detection: [ongoing]

  • TFLite Object Detection: [link] [demo]
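
  To illustrate the Face Verification item above: once two face crops have been turned into embedding vectors, verification typically reduces to a similarity check against a threshold. The sketch below shows this idea with cosine similarity; it is not the repository's actual implementation, and the function names and the 0.5 threshold are assumptions to be tuned for your embedding model.

      import numpy as np

      def cosine_similarity(emb_a, emb_b):
          # Normalize both embeddings; the dot product is then the cosine of the angle between them.
          emb_a = emb_a / np.linalg.norm(emb_a)
          emb_b = emb_b / np.linalg.norm(emb_b)
          return float(np.dot(emb_a, emb_b))

      def is_same_person(emb_a, emb_b, threshold=0.5):
          # Hypothetical threshold; calibrate it on your own data.
          return cosine_similarity(emb_a, emb_b) >= threshold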

Installation

Follow the install.md instructions [link] to install the essential packages and modules for working with the Raspberry Pi, including OpenVINO as the toolkit for development.

Usage

  1. Clone this repository:

    cd ~

    mkdir workspace && cd workspace

    git clone https://github.com/danhdoan/computer-vision-raspberrypi

  2. Download OpenVINO pretrained-model

    mkdir openvino-models tflite-models

     You will notice a soft symbolic link in every project that maps to this directory. If you want to store the models elsewhere, remember to re-map this symbolic link. To download a model, just go to the official OpenVINO site from Intel:

    https://download.01.org/opencv/2019/open_model_zoo/R1/

     In this project, models from the R1 sub-directory are used. R3 is currently the latest and also works well with the sample code. Download only the FP16 models, since those are the ones that run on the Raspberry Pi. In my code, I usually add a -fp16 postfix to make this explicit, so after downloading a pretrained model, rename it accordingly.
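
     For reference, a renamed FP16 model pair placed in openvino-models/ can be loaded and pushed to the Neural Compute Stick through OpenCV's DNN module, as in the minimal sketch below. The model name is just an example from the zoo, and this is not necessarily how the sample apps load their networks (they may go through IECore instead).

       import cv2

       # Hypothetical FP16 IR pair downloaded from the model zoo and renamed with the -fp16 postfix.
       XML = 'openvino-models/face-detection-retail-0004-fp16.xml'
       BIN = 'openvino-models/face-detection-retail-0004-fp16.bin'

       net = cv2.dnn.readNet(XML, BIN)
       # Run inference on the Movidius Neural Compute Stick (MYRIAD target).
       net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

       image = cv2.imread('face.jpg')
       blob = cv2.dnn.blobFromImage(image, size=(300, 300))
       net.setInput(blob)
       detections = net.forward()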

  3. Run a sample project

    All sample projects share the same argument parser:

     usage: <app-name>.py [-h] [--video VIDEO] [--name NAME] [--show]                              
                               [--record] [--flip_hor] [--flip_ver]
     optional arguments:                                                                                    
       -h, --help            show this help message and exit
       --video VIDEO, -v VIDEO
                             Video Streaming link or Path to video source
       --name NAME, -n NAME  Name of video source
       --show, -s            Whether to show the output visualization
       --record, -r          Whether to save the output visualization
       --flip_hor, -fh       horizontally flip video frame
       --flip_ver, -fv       vertically flip video frame
    

    It is implemented in altusi/helper.py; you can customize it as you want. To see the argument list of a sample, e.g. the object detector, just type:

     python3 app-object-detector.py -h
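
     As a concrete (hypothetical) example, running the object detector on a saved video file while showing and recording the output would look like this; the video path and source name are placeholders:

      python3 app-object-detector.py --video /path/to/video.mp4 --name demo --show --record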
    

References