CarND-Capstone

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Team

Name Email
Kevin Sepehri kevinsepehri@gmail.com
Lajos Kamocsay panka.nospam@gmail.com
Mike Challis gardenermike@gmail.com
Rafael Barreto rafaelbarretorb@gmail.com

ROS Nodes

Waypoint Updater

The purpose of this node is to publish a fixed number of waypoints (40) ahead of the vehicle with the correct target velocities, depending on traffic lights and obstacles.

/ros/src/waypoint_updater/waypoint_updater.py

Subscribes to

/current_pose: The current position of the car.

/base_waypoints: One time load of waypoints from the whole track.

/current_velocity: The current velocity of the car.

/traffic_waypoint: Waypoint index of the closest red traffic light.

Not subscribing to /obstacle_waypoint, as obstacle detection is not yet part of the project.

Publishes

/final_waypoints: The list of upcoming waypoints; its length is set by the LOOKAHEAD_WPS variable. If the car is approaching a red light, the node reduces the target velocities on these waypoints so the car comes to a stop at the stop line.
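
The stop-line deceleration described above can be sketched as follows. This is a minimal illustration, not the repo's exact implementation; the function name and the MAX_DECEL value are assumptions. Each waypoint's target velocity is capped at the speed from which a constant deceleration still stops the car by the stop-line waypoint:

```python
import math

MAX_DECEL = 0.5  # m/s^2, assumed comfortable deceleration limit


def decelerate(velocities, distances, stop_idx):
    """Cap target velocities so the car stops at stop_idx.

    velocities: target speed (m/s) per waypoint
    distances:  cumulative distance (m) of each waypoint from the first
    stop_idx:   index of the waypoint at the stop line
    """
    result = []
    for i, v in enumerate(velocities):
        # distance remaining from this waypoint to the stop line
        dist = max(0.0, distances[stop_idx] - distances[i])
        # v^2 = 2*a*d  ->  highest speed that still allows a full stop
        v_stop = math.sqrt(2.0 * MAX_DECEL * dist)
        result.append(min(v, v_stop))
    return result
```

The square-root profile gives a smooth, constant-deceleration ramp down to zero at the stop line, instead of an abrupt velocity cut.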

Drive By Wire (DBW)

This node subscribes to various topics and uses controllers to compute appropriate throttle, brake, and steering commands.

/ros/src/twist_controller/dbw_node.py

Subscribes to

/twist_cmd: Twist commands that describe the target linear and angular velocities.

/current_velocity: The current velocity of the car.

/vehicle/dbw_enabled: Boolean for switching between manual and DBW control.

/tl_detector_ready: Boolean that becomes true once the traffic light detector is ready. If the detector is not ready, the car brakes and waits.

Publishes

/vehicle/throttle_cmd: Throttle commands.

/vehicle/brake_cmd: Brake commands.

/vehicle/steering_cmd: Steering commands.
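
A common pattern for the throttle/brake split, sketched below under stated assumptions: the brake command is a torque in N·m, the mass and wheel-radius values are illustrative placeholders (not necessarily this repo's parameters), and the proportional throttle gain is made up for the example:

```python
def brake_torque(decel, vehicle_mass=1736.35, wheel_radius=0.2413):
    """Convert a desired deceleration (m/s^2) into brake torque (N*m).

    torque = force * lever arm = (mass * decel) * wheel_radius
    """
    return abs(decel) * vehicle_mass * wheel_radius


def control(velocity_error, min_speed=0.1, current_speed=0.0):
    """Tiny throttle/brake split: accelerate on positive velocity error,
    brake on negative error, and hold the car with a constant torque
    when it is (nearly) stopped, e.g. waiting at a red light."""
    if velocity_error > 0:
        throttle = min(1.0, 0.3 * velocity_error)  # crude P control
        return throttle, 0.0
    if current_speed < min_speed:
        return 0.0, 700.0  # N*m, assumed enough to hold the car in place
    return 0.0, brake_torque(velocity_error)
```

Publishing zero throttle alone does not stop the car; converting the needed deceleration into a torque and holding a constant brake torque at standstill is what keeps it behind the stop line.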

Traffic Light Detection

This node uses a Keras MobileNet model to detect traffic lights and publishes the waypoint index of the upcoming light, along with a boolean indicating whether the model is ready.

/ros/src/tl_detector/tl_detector.py

Subscribes to

/current_pose: The current position of the car.

/base_waypoints: One time load of waypoints from the whole track.

/vehicle/traffic_lights: Location of the traffic light in 3D map space.

/image_color: Color image provided by camera.

Publishes

/tl_detector_ready: Boolean to notify other nodes that the keras model is loaded.

/traffic_waypoint: Waypoint index of the closest red traffic light.
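
Per-frame classifications are noisy, so detectors like this typically debounce the result before publishing to /traffic_waypoint. A sketch of that idea, where the 3-frame threshold and the class/method names are assumptions for illustration:

```python
STATE_COUNT_THRESHOLD = 3  # assumed: consecutive frames of agreement before publishing


class LightDebouncer:
    """Publish a red-light waypoint index only after the classifier has
    returned the same state for several consecutive frames; publish -1
    (no red light ahead) otherwise."""

    def __init__(self):
        self.state = None
        self.count = 0
        self.published = -1

    def update(self, state, red_wp):
        """state: classified color ('red', 'green', ...);
        red_wp: waypoint index of the light's stop line."""
        if state != self.state:
            # classification changed: restart the agreement counter
            self.state = state
            self.count = 0
        self.count += 1
        if self.count >= STATE_COUNT_THRESHOLD:
            self.published = red_wp if state == 'red' else -1
        return self.published
```

Until the new state is confirmed, the last confirmed value keeps being published, which prevents a single misclassified frame from releasing the brakes at a red light.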

Training

The model was trained on a subset of the Bosch data and simulator images. The data was augmented before training and then evaluated. The model achieves 95.7% accuracy on the combined simulator and Bosch data, and 99.7% accuracy on the simulator training data.

Sample Bosch Training Image

Sample Camera Bag Image

/traffic-light-detection/train.py

image_size = (224, 224, 3)
batch_size = 16
num_classes = 4
epochs = 96


base_model = MobileNet(
  alpha=0.25,          # adjust down to make model smaller/faster by reducing filter count
  depth_multiplier=1,  # adjust down to make model smaller/faster by reducing resolution per layer
  weights='imagenet',
  #weights=None,
  include_top=False,
  #classes=num_classes,
  input_shape=image_size
)

It also augments the data randomly:

# augment data
# rotate up to 2 degrees
image = preprocess.random_rotation(image, 2, row_axis=0, col_axis=1, channel_axis=2)
# randomly shift up to 20%
image = preprocess.random_shift(image, 0.2, 0.2, row_axis=0, col_axis=1, channel_axis=2)
# randomly zoom in up to 20%
image = preprocess.random_zoom(image, (0.8, 0.8), row_axis=0, col_axis=1, channel_axis=2)
# randomly adjust brightness
image = preprocess.random_brightness(image, (0.8, 1.2))
# randomly flip horizontally
if np.random.random() > 0.5:
    image = preprocess.flip_axis(image, 1)

Performance Tuning

We ran into performance issues between the simulator and ROS, so we tried the following optimizations:

  • Reducing the lookahead waypoints from 100 to 40
  • Only sending images to the model when the car is between 100 and 25 waypoints from the light
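
The second optimization reduces to a simple index check. A sketch, where the 25–100 window comes from the list above and the function name is illustrative:

```python
# assumed gating window (in waypoints), taken from the optimization above
MIN_GAP, MAX_GAP = 25, 100


def should_classify(car_wp, light_wp):
    """Only run the (expensive) classifier when the next light is
    between MIN_GAP and MAX_GAP waypoints ahead of the car."""
    if light_wp < car_wp:
        return False  # light is behind us
    gap = light_wp - car_wp
    return MIN_GAP <= gap <= MAX_GAP
```

Skipping frames when the light is far away (or already essentially reached) avoids running the neural network on every camera message, which was the main source of lag.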

Install

Please use one of the two installation options, either native or docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS.

  • Install Dataspeed DBW.

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the instructions from term 2

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real life images