Our Senior Design Capstone Project aimed to build a device that helps visually impaired individuals read printed text, such as books and newspapers.
Our device uses an 8MP Pi Camera connected to a Raspberry Pi 3; the user holds the device and moves it across a page to scan printed text. Scanned images are preprocessed with OpenCV and fed into a convolutional neural network trained on the EMNIST Balanced dataset.
Identified letters are output on our makeshift braille display, built from six servos that are driven through the GPIO pins using the pigpio library.
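As a rough illustration of the letter-to-braille step, the sketch below maps a predicted character to a six-dot braille cell and pushes each dot to a servo through pigpio. The GPIO pin numbers, pulse widths, and the small BRAILLE_DOTS table are placeholder assumptions for illustration, not the exact values used in letter_extraction_pi.py.

# braille_sketch.py -- illustrative only; pins and pulse widths are assumptions
import time
import pigpio

# One servo per braille dot; BCM pin numbers below are placeholders, not the project's wiring.
DOT_PINS = [4, 17, 27, 22, 5, 6]

# Braille dot patterns (dots 1-6) for a few letters.
BRAILLE_DOTS = {
    'a': {1},
    'b': {1, 2},
    'c': {1, 4},
}

RAISED_PW = 2000   # assumed servo pulse width (us) for a raised dot
LOWERED_PW = 1000  # assumed servo pulse width (us) for a lowered dot

def show_letter(pi, letter):
    """Raise the servos for the dots in the letter's braille cell."""
    dots = BRAILLE_DOTS.get(letter.lower(), set())
    for dot, pin in enumerate(DOT_PINS, start=1):
        pw = RAISED_PW if dot in dots else LOWERED_PW
        pi.set_servo_pulsewidth(pin, pw)

if __name__ == '__main__':
    pi = pigpio.pi()  # requires the pigpio daemon (sudo pigpiod) to be running
    if not pi.connected:
        raise SystemExit('pigpio daemon is not running (run: sudo pigpiod)')
    show_letter(pi, 'b')
    time.sleep(2)
    for pin in DOT_PINS:
        pi.set_servo_pulsewidth(pin, 0)  # stop sending pulses to the servos
    pi.stop()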
Install pip3.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-pip
Install OpenCV and image libraries.
sudo pip3 install opencv-contrib-python
sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100
sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
sudo apt-get install libgtk-3-dev
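To confirm the OpenCV install worked, a quick sanity check (assumes only a working Python 3 environment with numpy):

# prints the OpenCV version and runs a trivial threshold on a blank image
import cv2
import numpy as np

print(cv2.__version__)
blank = np.zeros((100, 100), dtype=np.uint8)
_, thresh = cv2.threshold(blank, 127, 255, cv2.THRESH_BINARY)
print(thresh.shape)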
Install TensorFlow.
# https://www.raspberrypi.org/magpi/tensorflow-ai-raspberry-pi/
sudo apt install libatlas-base-dev
pip3 install tensorflow
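A one-off check that TensorFlow imports cleanly on the Pi (the printed version string will vary):

python3 -c "import tensorflow as tf; print(tf.__version__)"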
Install the pigpio library.
wget abyz.me.uk/rpi/pigpio/pigpio.zip
unzip pigpio.zip
cd PIGPIO
make
sudo make install
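To verify the pigpio build, start the daemon and check that a Python client can connect (the daemon start is repeated later in the run steps; this is just a sanity check):

sudo pigpiod
python3 -c "import pigpio; print(pigpio.pi().connected)"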
Install the remaining Python dependencies.
pip3 install -r model/requirements.txt
Connect the servos to the GPIO pins and the Pi Camera ribbon cable to the CSI camera port.
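A quick way to confirm the camera ribbon is seated correctly (assumes the camera interface is enabled in raspi-config and the stock picamera library is installed):

# capture a single still to confirm the camera is detected
from time import sleep
from picamera import PiCamera

camera = PiCamera()
sleep(2)                      # give the sensor time to warm up
camera.capture('test.jpg')    # writes a test image to the current directory
camera.close()
print('wrote test.jpg')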
Enable X11 forwarding when you SSH into the Pi.
ssh -Y pi@raspberrypi.local
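With X11 forwarding active, OpenCV windows opened on the Pi (for example via cv2.imshow) appear on your local machine. A quick check, assuming only OpenCV and numpy:

# should open a small black window on the machine you ssh'd from
import cv2
import numpy as np

cv2.imshow('x11-test', np.zeros((200, 200), dtype=np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()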
Start the pigpio daemon, then run the program.
sudo pigpiod
python3 letter_extraction_pi.py
The camera will turn on and the servos will begin moving automatically.