TensorFlow-Powered Vision for a Pi-Based Robot


Introduction

This is a Pi-based robot that implements visual recognition using Inception V3. The TensorFlow-powered vision system can recognize many objects, such as people, cars, buses, fruits, and so on.

  • Hardware: Raspberry Pi 2, Sony PS3 Eye camera

    (A Logitech C270 USB camera can also be used with the Raspberry Pi.)

  • Software: TensorFlow (v1.0.1), Jupyter Notebook

[Figure: Structure.png]

My motivation

I was curious about how well image recognition with TensorFlow would perform on a Raspberry Pi. The Jupyter Notebook is also very convenient for quickly coding a prototype. In terms of image-classification error rate, Inception V3 (3.46%) is better than humans (5.1%), although the Raspberry Pi's processing speed is very slow compared to my laptop.

(Table: Jeff Dean's keynote @ Google Brain.)

[Figure: Chart_IR.png]

  • Schematic diagram of Inception-v3

[Figure: InceptionV3.png]

Requirements and Installation

  • Install a webcam driver on your Raspberry Pi:
sudo apt-get install fswebcam
  • Test your webcam:
fswebcam test.jpg
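
If you prefer to trigger the camera from Python rather than the shell, the sketch below (illustrative, not part of this repository) simply wraps fswebcam with subprocess; the file name and resolution are assumptions.

# capture.py -- illustrative sketch, not from this repository:
# grab one still frame with fswebcam so it can later be fed to the classifier.
import subprocess

def capture_image(path="test.jpg", resolution="640x480"):
    """Capture a single image from the default webcam via fswebcam."""
    subprocess.check_call(["fswebcam", "-r", resolution, "--no-banner", path])
    return path

if __name__ == "__main__":
    print("Saved frame to", capture_image())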

Quick Start

  • You should install both TensorFlow (v1.0.1) and the Jupyter Notebook on your Raspberry Pi; a quick version check is sketched just below.
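
Once both are installed, the following snippet (illustrative, not the repository's code) confirms that the expected TensorFlow version is importable:

# check_tf.py -- illustrative sanity check, not part of the repository
import tensorflow as tf
print(tf.__version__)  # should print 1.0.1 for this setup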

  • First, clone the TensorFlow-Powered_Robot_Vision git repository. This can be accomplished by:

cd /home/pi/Documents
git clone https://github.com/leehaesung/TensorFlow-Powered_Robot_Vision.git

Next, cd into the newly created directory:

cd TensorFlow-Powered_Robot_Vision

Launch the Jupyter Notebook on your Raspberry Pi:

jupyter-notebook

The pre-trained weights (inception_v3.ckpt) are downloaded automatically when the notebook runs. (Location: /home/pi/Documents/datasets/inception)
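
For reference, the sketch below shows one way such a checkpoint can be restored and used to classify a captured frame. It is not the notebook's actual code: it assumes the TF-Slim model library from tensorflow/models (which provides nets/inception.py) is on the PYTHONPATH, and the checkpoint path and image file name are illustrative.

# classify_sketch.py -- illustrative only, not this repository's notebook code.
# Assumes the TF-Slim model library (tensorflow/models, nets/inception.py)
# is importable and the checkpoint has already been downloaded.
import tensorflow as tf
from nets import inception  # from tensorflow/models (slim); an assumption

slim = tf.contrib.slim
CHECKPOINT = "/home/pi/Documents/datasets/inception/inception_v3.ckpt"
IMAGE_FILE = "test.jpg"  # e.g. the frame captured with fswebcam earlier
SIZE = inception.inception_v3.default_image_size  # 299

with tf.Graph().as_default():
    # Decode the JPEG and apply simplified Inception preprocessing:
    # resize to 299x299 and scale pixel values to [-1, 1].
    raw = tf.read_file(IMAGE_FILE)
    image = tf.image.convert_image_dtype(
        tf.image.decode_jpeg(raw, channels=3), tf.float32)
    image = tf.image.resize_images(image, [SIZE, SIZE])
    images = tf.expand_dims((image - 0.5) * 2.0, 0)

    # Build the Inception V3 graph and turn logits into class probabilities.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=1001, is_training=False)
    top5 = tf.nn.top_k(tf.nn.softmax(logits), k=5)

    saver = tf.train.Saver(slim.get_model_variables("InceptionV3"))
    with tf.Session() as sess:
        saver.restore(sess, CHECKPOINT)
        values, indices = sess.run(top5)
        print("Top-5 ImageNet class ids:", indices[0], "probabilities:", values[0])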

Source Codes

Results of Object Recognition

  • Wow! The result is really awesome!!

[Figure: RecognitionResult.png]

Reference