OpenCV in Python, with multiprocessing and shared memory!
A collection of small programs for processing live video (for example, from a webcam) with OpenCV. Image data in memory (NumPy arrays) is shared among multiple processes using the numpy-sharedmem Python module, and the parallel processing workflow is implemented with MPipe.
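To give a sense of how the pieces fit together, here is a minimal sketch (not taken from the Sherlock sources) of sharing a frame buffer between processes with numpy-sharedmem. It assumes sharedmem.empty() allocates a NumPy array backed by shared memory, which child processes created with multiprocessing can read and write without copying:

    import multiprocessing
    import numpy as np
    import sharedmem  # numpy-sharedmem module

    # Allocate a 480x640 BGR image buffer in shared memory.
    image = sharedmem.empty((480, 640, 3), dtype=np.uint8)

    def invert(buf):
        # The child process modifies the shared buffer in place;
        # the pixel data itself is never pickled or copied.
        buf[:] = 255 - buf

    proc = multiprocessing.Process(target=invert, args=(image,))
    proc.start()
    proc.join()

In Sherlock, buffers like this are handed from stage to stage of an MPipe pipeline, so only small references (not whole frames) travel between processes.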
First, you'll need the OpenCV Python bindings on your system. Get them using YUM:
yum install opencv-python
or Aptitude:
aptitude install python-opencv
or Homebrew (on OS X):
brew tap homebrew/science
brew install opencv
Okay, now get the project code:
git clone https://github.com/vmlaker/sherlock
Run make:
cd sherlock
make
For starters, let's test your OpenCV Python bindings.
This command shows a live view from the first video device
(i.e. /dev/video0) for a duration of 8 seconds:
make playcv2
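The playcv2 target runs a script from this repo; the following standalone sketch does roughly the same thing and is a quick way to verify that cv2 can talk to your camera (it exits on a 'q' keypress rather than after a fixed duration):

    import cv2

    # Open the first video device (/dev/video0 on Linux).
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('live', frame)
        # Quit on 'q' keypress (waitKey also services the GUI event loop).
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()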
The following cases demonstrate simplified motion detection. Each iteration increases in complexity, with cumulative changes intended to improve performance.
Run one process:
make diffavg1
Add parallel processing:
make diffavg2
Add shared memory:
make diffavg3
Add filtering:
make diffavg4
The processing algorithm is a run-of-the-mill foreground/background segmentation using a running scene average.
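In rough outline (this is not the exact code in the diffavg scripts; the alpha and threshold values are only illustrative), the single-process version looks something like this:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    average = None  # running scene average (float32)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if average is None:
            average = np.float32(gray)
        # Update the running average of the scene (the "background").
        cv2.accumulateWeighted(gray, average, 0.05)
        # Foreground = absolute difference between the frame and the average.
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(average))
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        cv2.imshow('motion', mask)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

The later cases split the capture, averaging, and display steps across processes, and pass the frames through shared memory instead of copying them.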
Consider profiling resource usage by running the commands above with the time command.
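For example:

    time make diffavg1

Comparing the timings of the four cases gives a rough sense of how much the parallel and shared-memory versions help on your machine.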
Objects in the video stream are detected using Haar feature-based
cascade classifiers. Active classifiers are listed in the
src/util/cascade.py
file. By default, these are
the stock classifiers shipped with the OpenCV distribution.
You can edit this file to activate (or deactivate) classifiers,
change search paths, add your own custom classifiers,
and configure global object detection parameters.
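The detection itself boils down to OpenCV's CascadeClassifier API. Here is a minimal standalone sketch; the classifier path and the photo.jpg / detected.jpg filenames are placeholders for your system, and the scaleFactor/minNeighbors values are only illustrative:

    import cv2

    # Load a stock frontal-face classifier shipped with OpenCV; the path
    # varies between installs, so adjust it to match your system.
    cascade = cv2.CascadeClassifier(
        '/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml')

    image = cv2.imread('photo.jpg')
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns a list of (x, y, w, h) rectangles.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=4)

    for x, y, w, h in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite('detected.jpg', image)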
Run face detection serially:
make object1
Run face detection in parallel:
make object2