
Capstone Project | Team OSCAR

Team Members:

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car.

Overview

This project is the culmination of the efforts of five team members who share a bold vision: to create safe autonomous vehicles the world over.

Meet Carla

The Udacity self-driving car Carla

To learn more about Carla, see here.

Using the Robot Operating System (ROS), each team member has developed and maintained a core component of the infrastructure demanded by a truly autonomous vehicle. The three core components of any good robot are the following:

  • Perception: sensing the environment to perceive obstacles and traffic hazards, as well as traffic lights and road signs
  • Planning: planning a route to a given goal state using data from localisation, perception and environment maps
  • Control: executing the trajectories formed during planning by actuating the vehicle through steering, throttle and brake commands (see the control sketch after this list)
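
To make the control component concrete, the sketch below shows a simple PID speed controller of the kind commonly used to turn a target velocity into throttle and brake commands. It is an illustrative stand-in rather than the exact controller in our drive-by-wire node; the gains and limits are placeholder values.

class SimplePID(object):
    """Minimal PID controller; gains here are placeholders, not tuned values."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.last_error) / dt if dt > 0 else 0.0
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def velocity_to_commands(pid, target_speed, current_speed, dt):
    """Map the speed error to (throttle, brake): a positive control signal
    becomes throttle, a negative one becomes a brake command."""
    control = pid.step(target_speed - current_speed, dt)
    if control >= 0.0:
        return min(control, 1.0), 0.0
    return 0.0, min(-control, 1.0)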

ROS Node Architecture

Node architecture

Node Details

Traffic Light Recognition

Using information from the vehicle's current pose as well as sensing the environment through raw camera data, we are able to detect the presence of traffic lights and perform recognition to determine their current state. This sensing informs downstream ROS nodes whether the vehicle should drive, stop or slow down.
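
As a rough illustration of how that information reaches the downstream nodes, the sketch below publishes the stop-line waypoint index of the nearest red light on the /traffic_waypoint topic (an Int32, with -1 meaning no red light ahead), following the interface used by the Udacity starter code. It is a simplified outline, not our complete tl_detector node.

#!/usr/bin/env python
import rospy
from std_msgs.msg import Int32

RED = 0  # value of styx_msgs/TrafficLight.RED in the starter code

class TrafficLightReporter(object):
    """Publish the stop-line waypoint index of the nearest red light,
    or -1 so the waypoint updater knows it can keep driving."""

    def __init__(self):
        rospy.init_node('tl_reporter')
        self.pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)

    def report(self, light_state, stop_line_waypoint):
        if light_state == RED:
            self.pub.publish(Int32(stop_line_waypoint))
        else:
            self.pub.publish(Int32(-1))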

Sensing is performed by two independent models. The first detects objects, in this case traffic lights; the second takes the output of the first and classifies each detected light according to its state, e.g. green, yellow or red. This two-pronged approach to recognition provides a robust detection model in case of failure, and makes each model "hot-swappable" when improved models become available.

The detection module uses the MobileNet Single-Shot Detector (SSD) from the TensorFlow model zoo. This model proved sufficiently accurate within the given timing constraints. A full evaluation and comparison to other models can be found in this notebook.
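
For reference, inference with a frozen SSD graph exported from the TensorFlow Object Detection API typically looks like the sketch below (TensorFlow 1.x style). The graph path and score threshold are placeholders; see the notebook for the model we actually evaluated.

import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'  # placeholder path

# Load the frozen detection graph once at start-up.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)

def detect_traffic_lights(image_rgb, score_threshold=0.5):
    """Return normalised bounding boxes [ymin, xmin, ymax, xmax] for
    detections scoring above the threshold."""
    boxes, scores = sess.run(
        ['detection_boxes:0', 'detection_scores:0'],
        feed_dict={'image_tensor:0': np.expand_dims(image_rgb, axis=0)})
    keep = scores[0] > score_threshold
    return boxes[0][keep]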

Classification of traffic lights is performed by a KaNet deep neural network. The KaNet is unique because it allows additional independent learning through layer divergence of the target variables. It is also extremely fast at inference time, a necessary requirement for real-time recognition.

KaNet model

Training: Training the KaNet model is performed like any other multi-class classification problem, using 1-of-K encoding and cross-entropy loss. (For more information on training the KaNet model, see the notebook.)
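
A minimal sketch of that training setup is shown below. The small CNN here is only a placeholder standing in for the KaNet architecture; the real model, data pipeline and hyperparameters are in the notebook.

from tensorflow import keras

NUM_CLASSES = 3  # red, yellow, green

# Placeholder architecture standing in for KaNet.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])

# 1-of-K (one-hot) targets with cross-entropy loss, as described above.
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# x_train: (N, 64, 64, 3) cropped light images; y_labels: integer class ids.
# y_onehot = keras.utils.to_categorical(y_labels, NUM_CLASSES)
# model.fit(x_train, y_onehot, epochs=10, batch_size=32, validation_split=0.1)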

Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPUs
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Install Dataspeed DBW

  • Download the Udacity Simulator.

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install Python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag file that was recorded on the Udacity self-driving car
  2. Unzip the file
unzip traffic_light_bag_files.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch