
Object detection model on Raspberry Pi 4

A real-time object detection model on Raspberry Pi 4 using the TensorFlow Lite module in Python



Table of contents:

  Introduction
  Prerequisites
  Usage
  Speed up model inference (with Coral USB Accelerator)

Introduction

This project uses TensorFlow Lite with Python on a Raspberry Pi to perform real-time object detection using images streamed from the Pi Camera. It draws a bounding box around each detected object in the camera preview (when the object score is above a given threshold).
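As a minimal sketch of that step, assuming the TFLite Task Library API from tflite-support 0.4.x (the model file name, threshold, and thread count here are illustrative, not necessarily what detect.py uses):

import numpy as np
from tflite_support.task import core, processor, vision

# Load the detector; the file name and option values are assumptions.
base_options = core.BaseOptions(file_name="efficientdet_lite0.tflite", num_threads=4)
detection_options = processor.DetectionOptions(max_results=3, score_threshold=0.3)
options = vision.ObjectDetectorOptions(base_options=base_options,
                                       detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)

# In the real pipeline this RGB frame comes from the Pi Camera.
rgb_frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Only detections scoring above score_threshold are returned; each one
# carries a bounding box, a label, and a score.
result = detector.detect(vision.TensorImage.create_from_array(rgb_frame))
for detection in result.detections:
    category = detection.categories[0]
    print(detection.bounding_box, category.category_name, category.score)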

This project uses transfer learning, which lets you build on a pre-trained model instead of training one from scratch. TensorFlow provides pre-trained models for common use cases; EfficientDet-Lite0 is the model used in this project. Check out Prerequisites for the installation of the needed models.

A Coral USB Accelerator was not used in this project; however, the source code supports it. Check out Usage.

Prerequisites

Important

The project was written and tested on Raspberry Pi OS 32-bit Bullseye. The tflite-support library only supports Python 3.7 - 3.9 at the time of writing, and Bullseye ships with Python 3.9 by default. If you're using Bookworm or another OS release, check your Python version and make sure tflite-support supports the sub-modules needed for this project.
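You can verify the version quickly; on Bullseye this should report 3.9.x:

$ python3 --version
Python 3.9.2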

Important

Using a Python virtual environment is highly recommended. Installing packages with sudo pip installs them globally, which may break some system tools; a virtualenv avoids the need to install Python packages globally.

To install the requirements, first create a Python virtual environment:

$ python3 -m venv ENV_DIR

ENV_DIR should be a non-existent directory and can have any name. To keep these instructions simple, I will assume you have created your virtualenv in a directory called tflite (i.e. with python3 -m venv tflite).

To work in your virtualenv, you activate it:

$ source ./tflite/bin/activate

When you're done, you can get out of the virtualenv by deactivating it:

(tflite)$ deactivate
$ 

With the virtual environment activated, clone the repository and, from inside its directory, type:

(tflite)$ sh setup.sh

This will upgrade your packages and pip, then install the required Python modules along with the pre-trained models.
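For orientation, a setup script like this one typically boils down to the following steps (an illustrative outline, not the actual contents of setup.sh; the package list and the model download step in particular are assumptions):

# Illustrative outline only -- see setup.sh for the real steps.
sudo apt update && sudo apt full-upgrade -y          # upgrade system packages
python -m pip install --upgrade pip                  # upgrade pip itself
python -m pip install tflite-support opencv-python   # required Python modules
# ...followed by downloading the pre-trained EfficientDet-Lite0 model file(s)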

Usage

Important

The CSI camera module is used by default in the source code. If you're using a webcam, you need to modify the detect() subprogram call in main.py.

To switch to a webcam, change the call to:

# detect(True, DISPLAY_WIDTH, DISPLAY_HEIGHT, THREAD_NUM, False)  # CSI Pi Camera (default)
detect(False, DISPLAY_WIDTH, DISPLAY_HEIGHT, THREAD_NUM, False)   # webcam
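For reference, judging from the two call variants shown in this README, the detect() parameters appear to be the following (an inferred sketch, not taken from the source; check detect.py for the authoritative signature):

def detect(use_picamera, display_width, display_height, num_threads, enable_edgetpu):
    # use_picamera:   True for the CSI Pi Camera, False for a webcam
    # display_width:  width of the camera preview in pixels
    # display_height: height of the camera preview in pixels
    # num_threads:    CPU threads used by the TFLite interpreter
    # enable_edgetpu: True to run inference on a Coral Edge TPU (see below)
    ...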

To run the project, change into the ./src directory and type:

(tflite)$ python3 detect.py

Note

If you are getting an error like:

ImportError: /lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ~/.local/lib/python3.9/site-packages/tensorflow_lite_support/metadata/cc/python/_pywrap_metadata_version.so)

Downgrade your tflite-support from version 0.4.4 (current) to 0.4.3 using the following command:

(tflite)$ python -m pip install --upgrade tflite-support==0.4.3
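You can confirm that the downgrade took effect with:

(tflite)$ python -m pip show tflite-support | grep Version
Version: 0.4.3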


[Screenshot taken on this project]

You should see the camera feed appear on the monitor attached to your Raspberry Pi. Put some objects in front of the camera, like a coffee mug or keyboard, and you'll see boxes drawn around those that the model recognizes, including the label and score for each. It also prints the number of frames per second (FPS) at the top-left corner of the screen. As the pipeline contains some processes other than model inference, including visualizing the detection results, you can expect a higher FPS if your inference pipeline runs in headless mode without visualization.
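For illustration, the FPS overlay amounts to timing the loop and drawing text with OpenCV, along these lines (a sketch assuming the project visualizes with cv2; names and constants are illustrative):

import time
import cv2

fps, frame_count, start_time = 0.0, 0, time.time()

def draw_fps(frame):
    # Re-estimate FPS every 10 frames and draw it at the top-left corner.
    global fps, frame_count, start_time
    frame_count += 1
    if frame_count == 10:
        fps = frame_count / (time.time() - start_time)
        frame_count, start_time = 0, time.time()
    cv2.putText(frame, "FPS = {:.1f}".format(fps), (24, 20),
                cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 255), 1)
    return frame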

Speed up model inference (with Coral USB Accelerator)

If you want to significantly speed up the inference time, you can attach a Coral USB Accelerator, a USB accessory that adds the Edge TPU ML accelerator to any Linux-based system.

If you have a Coral USB Accelerator, you can run the sample with it enabled:

  1. First, be sure you have completed the USB Accelerator setup instructions.

  2. Run the object detection script using the EdgeTPU TFLite model with the EdgeTPU option enabled. Note that the EdgeTPU requires a specific TFLite model that is different from the one used above (see the sketch at the end of this section). Change the detect() call in main.py to:

# detect(False, DISPLAY_WIDTH, DISPLAY_HEIGHT, THREAD_NUM, False)  # CPU inference
detect(False, DISPLAY_WIDTH, DISPLAY_HEIGHT, THREAD_NUM, True)     # EdgeTPU enabled

Then run:

(tflite)$ python3 detect.py

You should see significantly faster inference speeds.
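For reference, the last argument of detect() maps onto the Task Library's Coral option, roughly like this (a sketch; the EdgeTPU model file name is an assumption):

from tflite_support.task import core

# use_coral=True delegates inference to the Edge TPU; it requires the
# EdgeTPU-compiled .tflite model rather than the regular CPU model.
base_options = core.BaseOptions(file_name="efficientdet_lite0_edgetpu.tflite",
                                use_coral=True, num_threads=4)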