The Detectionator is a fairly minimal Raspberry Pi camera build for taking pictures of particular objects, such as animals or people. The Detectionator embeds Exif geolocation metadata in the photographs it takes. With the help of my AutoUpload project, pictures are automatically offloaded from local storage to cloud storage, either to S3-compatible object storage with Rclone or to Immich with the Immich CLI. The photos are then removed from local storage. This repository contains documentation, configuration files, and the Python program for the Detectionator. The project is largely based on the TensorFlow example in the picamera2 project and includes the same models.
The Detectionator is built around the Raspberry Pi 5 and the Camera Module 3. It incorporates a GPS HAT for the Raspberry Pi to obtain the geolocation data to embed in the photographs.
ℹ️ The GPS HAT requires soldering on a GPIO header.
- Raspberry Pi 5 Model B (4 GB RAM or better)
- Camera Enclosure Case for Raspberry Pi Camera Module 3/V1/V2 and Arducam 16MP/64MP Camera
- A CR1220 battery for the RTC.
- A sufficiently large and performant M.2 NVMe SSD. I prefer the SK hynix Gold P31 for its lower power draw and heat output.
- Alternatively, a sufficiently large and performant microSD card can be used instead of NVMe storage. The 128 GB Samsung Pro Ultimate and 128 GB Samsung Pro Endurance are two good options.
- Extended M2.5 Standoffs for Pi HATs (Raspberry Pi 5) - Pack of 4. This is a pack of four 17 mm M2.5 hex socket-socket standoffs and eight 6 mm M2.5 screws.
- Booster Header. This header has a standard 8.5 mm height with 5 mm tall pins.
💡 The Adafruit Ultimate GPS HAT's cutout for the camera flex cable doesn't line up with the connectors on the Raspberry Pi 5. Use a drill to widen the cutout towards the edge of the board to allow the flex cable enough room to fit.
- Install the 64-bit full version of Raspberry Pi OS to a microSD card. This project has been tested with Raspberry Pi OS 5 based on Debian Bookworm.
- Insert the microSD card into the Raspberry Pi.
- Boot the Raspberry Pi.
- Create the `detectionator` user account. The detectionator service is intended to be run under a dedicated user account. Since the autoupload mechanism for the Immich CLI relies on running containers, the instructions here configure this user account to run containers.
  - Create a `detectionator` system group.

    ```sh
    sudo groupadd --gid 818 --system detectionator
    ```
  - Create a primary user account named `detectionator`.

    ```sh
    sudo useradd \
        --add-subids-for-system \
        --comment "Primary account taking photographs with object-detection" \
        --create-home \
        --gid detectionator \
        --groups render,systemd-journal,video \
        --shell /usr/sbin/nologin \
        --system \
        --uid 818 \
        detectionator
    ```

    💡 The `--btrfs-subvolume-home` flag can be used to create the user's home directory on a separate Btrfs subvolume.
  - Verify that the new `detectionator` user has entries in `/etc/subuid` and `/etc/subgid`. If for some reason there are no subuid and subgid ranges for the user, follow these steps; a quick check to confirm the ranges is shown after them. I don't know why this happens, but it does sometimes.

    ℹ️ These commands use the fish shell because I can never remember how to math in Bash.[1]
    - Calculate the first value for the next subuid allotment.

      If `/etc/subuid` is empty, use 100,000 as the initial value.

      ```fish
      set new_subuid 100000
      ```

      Otherwise, use the following command to calculate the next available subuid range.

      ```fish
      set new_subuid (math (tail -1 /etc/subuid | awk -F ":" '{print $2}') + 65536)
      ```
    - Calculate the first value for the next subgid allotment.

      If `/etc/subgid` is empty, use 100,000 as the initial value.

      ```fish
      set new_subgid 100000
      ```

      Otherwise, use the following command to calculate the next available subgid range.

      ```fish
      set new_subgid (math (tail -1 /etc/subgid | awk -F ":" '{print $2}') + 65536)
      ```
    - Configure the `detectionator` user with the calculated subuid and subgid ranges.

      ```fish
      sudo usermod \
          --add-subuids $new_subuid-(math $new_subuid + 65535) \
          --add-subgids $new_subgid-(math $new_subgid + 65535) \
          detectionator
      ```
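    After configuring the ranges, a quick grep can confirm that the user now has subordinate UID and GID allotments. This check is my own suggestion rather than part of the original steps.

    ```sh
    # Confirm that the detectionator user has subordinate ID ranges allocated.
    grep detectionator /etc/subuid /etc/subgid
    ```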
  - Automatically start the `detectionator` user's session.

    ```sh
    sudo loginctl enable-linger detectionator
    ```
  - Open a shell as the `detectionator` user with the following command. I prefer the fish shell, so I use that here, but substitute Bash, Zsh, etc. per your preference.

    ```sh
    sudo -H -u detectionator fish -c 'cd; fish'
    ```
  - Configure the `XDG_RUNTIME_DIR` environment variable for the user in order for sockets to be found correctly.

    ```fish
    set -Ux XDG_RUNTIME_DIR /run/user/(id -u)
    ```
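    With lingering enabled and `XDG_RUNTIME_DIR` set, the user's systemd instance should be reachable from this shell. As an optional sanity check of my own, the user manager can be queried directly.

    ```sh
    # Verify that the detectionator user's systemd session and D-Bus socket are available.
    systemctl --user status
    ```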
- Install just by following the instructions in the installation section.
- Follow the instructions to configure the storage service and autoupload systemd units in the AutoUpload README. This should automatically upload photos in the `/home/detectionator/Pictures` directory. The commands to enable the units should look similar to the following. Use the user unit for Immich, since it runs a container.
  - Immich

    ```sh
    sudo -H -u detectionator bash -c 'systemctl --user enable --now autoupload-immich@$(systemd-escape --path ~/Pictures).timer'
    ```

  - Rclone

    ```sh
    sudo systemctl enable --now autoupload-rclone@$(systemd-escape --path /home/detectionator/Pictures).timer
    ```
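  To confirm that the upload timers are actually scheduled, the timers can be listed afterwards. These checks are my suggestion and aren't part of the AutoUpload README.

  ```sh
  # The Rclone timer is a system unit.
  systemctl list-timers 'autoupload-*'
  # The Immich timer runs in the detectionator user's session.
  sudo -H -u detectionator bash -c 'systemctl --user list-timers'
  ```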
- For security, be sure to disable password-based SSH authentication. After your public key has been added to the `~/.ssh/authorized_keys` file on the Pi camera, this can be configured in the `/etc/ssh/sshd_config` file. You can follow the instructions in my OpenSSH Config repository to accomplish this and a few other optimizations.
- Update the package lists.

  ```sh
  sudo apt-get update
  ```

- Upgrade everything.

  ```sh
  sudo apt-get --yes full-upgrade
  ```
- Make the `~/Projects` directory.

  ```sh
  mkdir --parents ~/Projects
  ```

- Clone this project's repository to the `~/Projects` directory.

  ```sh
  git -C ~/Projects clone https://github.com/jwillikers/detectionator.git
  ```

- Change to the project's directory.

  ```sh
  cd ~/Projects/detectionator
  ```
- Set up the environment with `just init`. This will install dependencies and initialize the virtual environment. It also installs a special udev rule for the Adafruit Ultimate GPS to give it a static device name.

  ```sh
  just init
  ```
- Enable the serial port hardware and better PCIe speeds in `/boot/firmware/config.txt`.

  `/boot/firmware/config.txt`

  ```ini
  [all]
  dtparam=pciex1_gen=3
  dtoverlay=pps-gpio,gpiopin=4
  dtparam=uart0=on
  # Allow USB devices to pull the maximum current possible.
  usb_max_current_enable=1
  # Recharge the RTC battery.
  dtparam=rtc_bbat_vchg=3000000
  # Reduce clockspeeds and temperature limit to reduce the likelihood of crashes.
  # Crashes seem particularly common when recording video.
  #arm_freq = 2000
  #core_freq = 900
  #v3d_freq = 900
  #temp_limit = 80
  disable_camera_led = 1
  ```
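  After rebooting later in these steps, the GPS HAT's serial port should be available. Assuming the default device name used in the gpsd configuration below, a quick check of my own is:

  ```sh
  # The primary UART should appear once uart0 is enabled.
  ls -l /dev/ttyAMA0
  ```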
- Ensure that `console=tty1` is in `/boot/firmware/cmdline.txt` and not `console=ttyAMA0` or `console=serial0`.

  `/boot/firmware/cmdline.txt`

  ```
  console=tty1 root=PARTUUID=c64d4099-02 rootfstype=ext4 fsck.repair=yes rootwait cfg80211.ieee80211_regdom=US
  ```
- Configure the `pps-gpio` module to be loaded.

  ```sh
  echo 'pps-gpio' | sudo tee /etc/modules-load.d/pps-gpio.conf
  ```
- Configure gpsd to use the GPS HAT. The serial port `ttyAMA0` is used, and since the Raspberry Pi 5 has a built-in RTC, `pps1` is used here instead of `pps0`.

  `/etc/default/gpsd`

  ```sh
  # Devices gpsd should collect to at boot time.
  # They need to be read/writeable, either by user gpsd or the group dialout.
  DEVICES="/dev/ttyAMA0 /dev/pps1"

  # Other options you want to pass to gpsd
  GPSD_OPTIONS="--nowait"

  # Automatically hot add/remove USB GPS devices via gpsdctl
  USBAUTO="true"
  ```
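  Once gpsd is running with this configuration, it should be relaying NMEA sentences from the HAT. Assuming the gpsd client tools are installed, a quick way to confirm is:

  ```sh
  # Print ten raw NMEA sentences from gpsd and then exit.
  gpspipe -r -n 10
  ```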
- Configure chrony to use the GPS HAT for time.

  `/etc/chrony/conf.d/gpsd.conf`

  ```
  # set larger delay to allow the NMEA source to overlap with
  # the other sources and avoid the falseticker status
  refclock SOCK /run/chrony.ttyAMA10.sock refid GPS precision 1e-1 offset 0.9999
  refclock SOCK /run/chrony.pps1.sock refid PPS precision 1e-7
  ```
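  To verify that chrony picks up the GPS and PPS reference clocks once everything is running, the sources can be inspected. This check is my addition.

  ```sh
  # List chrony's time sources; the GPS and PPS refclocks should appear.
  chronyc sources -v
  ```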
- Reboot for the new udev rules to take effect.

  ```sh
  sudo systemctl reboot
  ```
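  After the reboot, the PPS signal from the GPS HAT can be checked directly. This is an optional sanity check of my own and assumes the pps-tools package is installed.

  ```sh
  # Watch for pulse-per-second events on the GPS HAT's PPS device.
  sudo ppstest /dev/pps1
  ```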
- Use `just run` to run the `detectionator.py` Python script inside the virtual environment.

  ```sh
  just run
  ```
- Install and activate the systemd service with `just install`.

  ```sh
  just install
  ```

  💡 Send the application the `SIGUSR1` signal with `kill --signal SIGUSR1 $(pgrep python)`.
- Check the status of the `detectionator.service` unit with the `systemctl status` command.

  ```sh
  sudo systemctl status detectionator.service
  ```

- Check the logs of the `detectionator.service` unit with `journalctl`.

  ```sh
  sudo journalctl -xeu detectionator.service
  ```
- Set up unattended upgrades to automatically update the system. I document how to do this in my blog post Unattended Upgrades.
The Raspberry Pi Camera Module 3 supports HDR, but only at a lower resolution. HDR support has to be toggled while `detectionator.py` isn't running.
- Show the available V4L subdevices.

  ```sh
  ls /dev/v4l-subdev*
  /dev/v4l-subdev0  /dev/v4l-subdev1  /dev/v4l-subdev2  /dev/v4l-subdev3
  ```
- To enable HDR support for the Raspberry Pi Camera Module 3, use the following command on one of the V4L subdevices. In my case, this ended up being `/dev/v4l-subdev2`.

  ```sh
  just hdr /dev/v4l-subdev2
  ```

- To disable HDR support for the Raspberry Pi Camera Module 3, use this command with the corresponding V4L subdevice.

  ```sh
  just hdr /dev/v4l-subdev2 disable
  ```
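The current HDR state can also be inspected directly with `v4l2-ctl`. I'm assuming here that the camera exposes HDR through the `wide_dynamic_range` control, which is how the Camera Module 3 typically presents it; adjust the subdevice to match your system.

```sh
# Read the HDR (wide dynamic range) control on the camera's V4L subdevice.
v4l2-ctl --device /dev/v4l-subdev2 --get-ctrl wide_dynamic_range
```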
- Run `just init-dev` to initialize the virtual environment for development. This will install all of the necessary dependencies and the pre-commit hooks.

  ```sh
  just init-dev
  ```
- Run the tests with pytest by running `just test`.

  ```sh
  just test
  ```

- To update dependencies, run `just update`.

  ```sh
  just update
  ```

- Use `just --list` to list other available tasks.

  ```sh
  just --list
  ```
The Raspberry Pi Camera Module 3 uses a rolling shutter. A rolling shutter can make object detection less accurate and produce graphical distortions for fast-moving objects. A global shutter would be ideal, but high-resolution cameras using that technology aren't easy to find for an embedded system. Lower-resolution options, such as the Raspberry Pi Global Shutter Camera, do exist and could be used to improve object detection for fast-moving objects. I might try a dual-camera system in the future, which could use the lower-resolution global shutter camera for detections while still capturing pictures with a higher-resolution rolling shutter camera.
- [ ] Add better configuration parameter checking, particularly around enum type options.
- [ ] Add a configuration parameter to configure the focus range. This could help speed up autofocus.
- [ ] Add a configuration parameter to configure the rest time in between runs of the detection algorithm when there was no detection.
- [ ] Add a configuration parameter to control whether or not to refocus when capturing burst shots.
- [x] Use gpsd.
- [ ] Create classes for different data types to better organize things.
- [x] Cache the GPS data to reduce the time to capture pictures between detections. Use TTLCache from cachetools.
- [x] Add support for a TOML config file with ConfigArgParse.
- [ ] Switch from picamera2 to GStreamer to work with more hardware.
- [ ] mypy
- [ ] async
- [ ] Create a weatherproof enclosure for the camera.
- [ ] Add a NixOS configuration and build SD card images.
- [ ] Should I be processing the images in grayscale?
The project’s Code of Conduct is available in the Code of Conduct file.
The models are from the picamera2 project’s TensorFlow example, and are likely subject to their own licenses. This repository is licensed under the GPLv3, available in the license file.
© 2024 Jordan Williams