
Final Project - System Integration

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Simulator run

The team

It is a solo submission by:

Implementation details

Software architecture

The following is a system architecture diagram showing the ROS nodes and topics used in the project.

Software architecture

Traffic lights detection

For detecting traffic lights in the camera feed, the ssdlite_mobilenet_v2_coco model, pre-trained on the COCO dataset, was taken from the TensorFlow detection model zoo. The model was selected for its processing speed, being the fastest among those listed at the time. Although its frozen inference graph was generated with the v1.8.0 release of TensorFlow while the project required v1.3.0, the model turned out to be compatible with that older version.

Detected traffic light
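A COCO-trained detector returns boxes for all object classes, so the first post-processing step is keeping only confident traffic-light detections. The sketch below assumes the frozen graph's outputs have already been fetched as NumPy arrays (`detection_boxes`, `detection_scores`, `detection_classes`); the function name `traffic_light_boxes` and the score threshold are illustrative, not from the project code:

```python
import numpy as np

TRAFFIC_LIGHT_CLASS = 10  # COCO label-map id for "traffic light"

def traffic_light_boxes(boxes, scores, classes, min_score=0.5):
    """Keep only confident traffic-light detections from the model output.

    boxes   -- (N, 4) array of (ymin, xmin, ymax, xmax)
    scores  -- (N,) confidence scores
    classes -- (N,) COCO class ids
    """
    keep = (classes == TRAFFIC_LIGHT_CLASS) & (scores >= min_score)
    return boxes[keep], scores[keep]
```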

Traffic light color identification

Once the model finds the traffic lights and returns their bounding boxes, the next step is to crop the traffic light images out of the scene based on those boxes and identify the color. The approach is entirely based on image processing:

  1. Convert the image into the LAB color space and isolate the L (lightness) channel. Good support material can be found here.

LAB color space channel `L`

  2. Split the cropped traffic light image into three equal segments - upper, middle, and lower - corresponding to the red, yellow, and green lights respectively.

Upper segment

Middle segment

Lower segment

  3. To identify the color, find out which segment is the brightest. Thanks to the LAB color space, the L channel gives exactly that information. All we need to do is sum the pixel intensities in each of the three segments; the segment with the highest sum gives the traffic light color.

The light is GREEN
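The three steps above can be sketched as follows. This assumes the L channel of the cropped light has already been extracted as a 2-D NumPy array (e.g. via `cv2.cvtColor(crop, cv2.COLOR_BGR2LAB)`); the function name `classify_light` is illustrative:

```python
import numpy as np

COLORS = ('red', 'yellow', 'green')

def classify_light(l_channel):
    """Classify a traffic light from the L channel of its cropped image.

    Splits the image into three equal horizontal segments (upper, middle,
    lower) and returns the color of the brightest one.
    """
    h = l_channel.shape[0] // 3
    segment_sums = [
        l_channel[:h].sum(),         # upper segment  -> red
        l_channel[h:2 * h].sum(),    # middle segment -> yellow
        l_channel[2 * h:3 * h].sum() # lower segment  -> green
    ]
    return COLORS[int(np.argmax(segment_sums))]
```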

In a real scenario, several filtering methods are applied to give a more reliable estimate:

  1. Among all the lights detected in a frame, the one with the highest confidence score is selected.
  2. The selected light's bounding box is verified against an aspect-ratio threshold. If the ratio falls outside the threshold, the image will not be cropped correctly and the color may be misidentified.
  3. Gamma correction is applied to every second frame to compensate for overly bright images.
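The filters above can be sketched as below. The function names, the aspect-ratio bounds, and the gamma value are illustrative assumptions, not the project's actual thresholds:

```python
import numpy as np

def best_detection(boxes, scores, min_ratio=1.5, max_ratio=4.0):
    """Return the highest-confidence box that looks like a traffic light.

    Boxes are (ymin, xmin, ymax, xmax); a traffic light is taller than it
    is wide, so boxes whose height/width ratio falls outside the given
    bounds are rejected. Returns None if nothing passes.
    """
    for i in np.argsort(scores)[::-1]:          # highest score first
        ymin, xmin, ymax, xmax = boxes[i]
        ratio = (ymax - ymin) / float(xmax - xmin)
        if min_ratio <= ratio <= max_ratio:
            return boxes[i]
    return None

def adjust_gamma(image, gamma=0.6):
    """Gamma-correct a uint8 image; gamma < 1 darkens overly bright frames."""
    table = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return table[image]
```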

traffic_light_training.bag

loop_with_traffic_light.bag

just_traffic_light.bag

Traffic lights detection v.2.0

Although the results with the ssdlite_mobilenet_v2_coco model were quite satisfactory, it did not perform well enough on the traffic_light_training.bag bag file. To improve performance, another model (ssd_mobilenet_v1_coco_2017_11_17) was retrained on a dataset kindly shared by one of the Nanodegree's alumni. A project by another student with the GitHub nickname coldKnight, explaining how to re-train the models, was a great help. However, because this project requires TensorFlow 1.3, the steps described there could not be followed directly: the research models and the tools required to train them for TensorFlow 1.3 have been removed from the repository, and the version supported at the time of writing was 1.8.0.

There are two ways one could still get the older version of the tensorflow/models repo:

  1. From archive.org:
$ wget https://archive.org/download/github.com-tensorflow-models_-_2017-10-05_18-42-08/tensorflow-models_-_2017-10-05_18-42-08.bundle
$ git clone tensorflow-models_-_2017-10-05_18-42-08.bundle -b master
$ mv tensorflow-models_-_2017-10-05_18-42-08 ~/tensorflow/models
  2. By checking out the commit hash:
$ cd ~/tensorflow/
$ git clone https://github.com/tensorflow/models.git
$ cd models
$ git checkout edcf29f

After that, follow the simple installation process described here. In case of doubt, the instructions can always be found inside the folder: research/object_detection/g3doc/installation.md.

The last step described there is to test the installation. If it fails, perform two more simple steps:

$ python setup.py build
$ python setup.py install

At this point the test script should pass.

The result after re-training the model:

traffic_light_training.bag

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the instructions from term 2

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real life images