- Raspberry Pi 4 (4GB minimum)
- Raspberry Pi Camera (V2 minimum)
- Micro SD card, 16 GB or larger
- Micro HDMI cable
- 12" CSI/DSI ribbon cable for the Raspberry Pi Camera (optional, but highly recommended)
- Change the default password.
- Run `sudo raspi-config` and select Interfacing Options from the Raspberry Pi Software Configuration Tool's main menu. Press ENTER.
- Select the Enable Camera option and enable the camera.
- Select the SSH option and enable remote SSH access.
Update packages and install the system dependencies:

```
sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y
sudo apt install -y cmake python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus
```

Create and activate a Python virtual environment, then upgrade setuptools:

```
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade setuptools
```
Clone the repository and download the prebuilt TensorFlow wheel:

```
git clone https://github.com/Jbithell/rpi-lectureTrack
cd rpi-lectureTrack
wget https://drive.google.com/file/d/1bpXR_sP4FpkzxSWAn3b5LwTYTu_tGi4F/view
```

NB: This step needs work, as the file doesn't download from Google Drive quite as easily as needed; one possible workaround is sketched below.
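One possible workaround (an assumption on my part; `gdown` is not a dependency of this repo) is the `gdown` package, which handles the confirmation page Google Drive serves for large files:

```python
# Hypothetical workaround using the gdown package (pip install gdown);
# gdown handles Google Drive's download-confirmation page for large files.
import gdown

FILE_ID = "1bpXR_sP4FpkzxSWAn3b5LwTYTu_tGi4F"
gdown.download(
    f"https://drive.google.com/uc?id={FILE_ID}",
    output="tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl",
    quiet=False,
)
```

Whichever route you use, the wheel should end up in the repository root so the pip install step below can find it.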
Install the TensorFlow wheel and the remaining Python dependencies, then install the package itself:

```
pip install tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl
pip install click picamera pillow smbus
python setup.py install
```
The `detect` command will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it.

```
rpi-lectureTrack detect person
rpi-lectureTrack detect --help
```
```
Usage: rpi-lectureTrack detect [OPTIONS] [LABELS]...

  rpi-lectureTrack detect [OPTIONS] [LABELS]

  LABELS (optional) One or more labels to detect, for example:
  $ rpi-lectureTrack detect person book "wine glass"

  If no labels are specified, the model will detect all labels in this list:
  $ rpi-lectureTrack list-labels

  The detect command will automatically load the appropriate model.

  For example, providing "face" as the only label will initialize the
  FaceSSD_MobileNet_V2 model:
  $ rpi-lectureTrack detect face

  Other labels use SSDMobileNetV3 with COCO labels:
  $ rpi-lectureTrack detect person "wine glass" orange

Options:
  --loglevel TEXT     Pass --loglevel=DEBUG to inspect FPS.
  --edge-tpu          Accelerate inferences using a Coral USB Edge TPU.
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-with-tensorflow-raspberry-pi-and-pan-tilt-hat-2aeaef47e134
  --help              Show this message and exit.
```
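The label-to-model rule the help text describes amounts to a simple dispatch; here is a hedged sketch of that logic (the function and return values are illustrative assumptions, not the package's actual API):

```python
# Hedged sketch of the label-driven model selection described above.
# Names here are illustrative, not this package's real internals.
def choose_model(labels):
    """Pick a detector based on the requested labels."""
    if labels and set(labels) == {"face"}:
        # "face" as the only label selects the face-specific detector.
        return "FaceSSD_MobileNet_V2"
    # Any other label (or none) falls through to the general COCO detector.
    return "SSDMobileNetV3"

print(choose_model(["face"]))               # FaceSSD_MobileNet_V2
print(choose_model(["person", "orange"]))   # SSDMobileNetV3
```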
The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement whilst sending its position out to Crestron:

```
rpi-lectureTrack track
```
```
Usage: rpi-lectureTrack track [OPTIONS] [LABEL]

  rpi-lectureTrack track [OPTIONS] [LABEL]

  LABEL (required, default: person) Exactly one label to track, for example:
  $ rpi-lectureTrack track person

  The track command will automatically load the appropriate model.

  For example, providing "face" will initialize the FaceSSD_MobileNet_V2 model:
  $ rpi-lectureTrack track face

  Other labels use the SSDMobileNetV3 model with COCO labels:
  $ rpi-lectureTrack track orange

Options:
  --loglevel TEXT     Pass --loglevel=DEBUG to inspect FPS and tracking
                      centroid X/Y coordinates.
  --edge-tpu          Accelerate inferences using a Coral USB Edge TPU.
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-with-tensorflow-raspberry-pi-and-pan-tilt-hat-2aeaef47e134
  --help              Show this message and exit.
```
```
rpi-lectureTrack list-labels
```

The following labels are valid tracking targets:

```
['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
```
`rpi-lectureTrack detect` and `rpi-lectureTrack track` perform inferences using this model. Bounding box and class predictions render at roughly 6 FPS on a Raspberry Pi 4.
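For orientation, here is a minimal sketch of how a TFLite SSD model of this kind is typically invoked; the model filename and the output-tensor ordering below are assumptions for illustration, not this package's actual internals.

```python
# Minimal TFLite SSD inference sketch. The model path and the output ordering
# (boxes, classes, scores, count) are assumptions, not this package's code.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="ssd_mobilenet_v3_small.tflite")  # assumed filename
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# SSD models take a single NHWC image tensor, e.g. (1, 320, 320, 3).
_, height, width, _ = input_details[0]["shape"]
frame = np.zeros((1, height, width, 3), dtype=input_details[0]["dtype"])  # stand-in for a camera frame

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# With the NMS post-processing layer, the model emits detection tensors directly.
boxes = interpreter.get_tensor(output_details[0]["index"])   # normalized [ymin, xmin, ymax, xmax]
scores = interpreter.get_tensor(output_details[2]["index"])  # confidence per detection
```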
The model is derived from `ssd_mobilenet_v3_small_coco_2019_08_14` in tensorflow/models. I extended the model with an NMS post-processing layer, then converted it to a format compatible with TensorFlow 2.x (FlatBuffer). I scripted the conversion steps in `tools/tflite-postprocess-ops-float.sh`.
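As a rough illustration of the same idea in Python (the exact steps and flags live in that shell script, so treat this as a sketch only), the conversion looks something like:

```python
# Illustrative-only sketch of converting the exported SSD model to a TFLite
# FlatBuffer; the real steps are in tools/tflite-postprocess-ops-float.sh.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(
    "ssd_mobilenet_v3_small_coco_2019_08_14/saved_model"  # path from the model zoo tarball
)
converter.allow_custom_ops = True  # keep the TFLite_Detection_PostProcess (NMS) op
tflite_model = converter.convert()

with open("ssd_mobilenet_v3_small.tflite", "wb") as f:
    f.write(tflite_model)
```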
The MobileNetV3-SSD model in this package was derived from TensorFlow's model zoo, with post-processing ops added.
The PID control scheme in this package was inspired by Adrian Rosebrock's tutorial, Pan/tilt face tracking with a Raspberry Pi and OpenCV.
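To make the control scheme concrete, here is a hedged sketch of the kind of PID update used for pan/tilt centroid tracking; the gains and variable names are illustrative examples, not the package's actual values.

```python
# Illustrative PID controller for pan/tilt centroid tracking (after
# Rosebrock's tutorial). Gains and names here are hypothetical.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """Return a control output that drives `error` toward zero."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The error is the offset between the detected centroid and the frame centre;
# the output nudges the pan (or tilt) angle so the object stays centred.
frame_center_x, centroid_x = 160, 190  # example pixel coordinates
pan = PID(kp=0.09, ki=0.08, kd=0.002)
correction = pan.update(error=frame_center_x - centroid_x, dt=1 / 30)
```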
This package is based on an original package by Leigh Johnson, which was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.