- Chris Kalle (bdschrisk) | Team Lead, Perception | Email: ontologia@chriskalle.com
- Ralph Fehrer (fera0013) | Perception | Email: ralphfehrer@gmail.com
- Hideto Kimuta (HidetoKimura) | Systems integration | Email: hidecchim2r@gmail.com
- Carlos Arreaza (carreaza) | Behavioral planning | Email: arreaza.c@gmail.com
- Moe Elsadig (moe-elsadig) | Behavioral planning | Email: m.da7th@gmail.com
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car.
This project is the culmination of the efforts of five team members who share a bold vision: to create safe autonomous vehicles the world over.
To learn more about Carla, see here.
Using the Robot Operating System (ROS), each team member has developed and maintained a core component of the infrastructure demanded by a truly autonomous vehicle. The three core components of any good robot are the following:
- Perception: Sensing the environment to perceive obstacles, traffic hazards, traffic lights and road signs
- Planning: Route planning to a given goal state using data from localisation, perception and environment maps
- Control: Executing the trajectories formed during planning by actuating the vehicle through steering, throttle and brake commands
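The three components above form a simple perceive-plan-control loop. The sketch below is purely illustrative; every function name is a hypothetical placeholder, not code from this repository:

```python
# Illustrative sketch of the perceive -> plan -> control loop described
# above. All names here are hypothetical stand-ins, not project code.

def perceive(camera_image):
    """Sense the environment; return the perceived traffic-light state."""
    return camera_image.get("light", "none")

def plan(light_state):
    """Choose a target behaviour from the perceived state."""
    return "stop" if light_state == "red" else "drive"

def control(behaviour):
    """Translate the planned behaviour into actuator commands."""
    if behaviour == "stop":
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
    return {"throttle": 0.4, "brake": 0.0, "steer": 0.0}

commands = control(plan(perceive({"light": "red"})))
print(commands)  # full brake at a red light
```

In the real system each stage is a separate ROS node communicating over topics rather than direct function calls.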
Using information from the vehicle's current pose as well as sensing the environment through raw camera data, we are able to detect the presence of traffic lights and perform recognition to determine their current state. This sensing informs downstream ROS nodes whether the vehicle should drive, stop or slow down.
Sensing is performed by two independent models: one detects objects, in this case traffic lights; the other takes the output of the first and classifies each traffic light according to its state, e.g. green, yellow or red. This two-pronged approach to recognition provides robustness in case one model fails, and each model is "hot-swappable" when an improved version becomes available.
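The two-stage pipeline can be sketched as follows. The stub functions stand in for the real MobileNet-SSD detector and KaNet classifier; their names and the dictionary-based "image" are hypothetical, chosen only to show how the stages compose and swap independently:

```python
# Hedged sketch of the two-model recognition pipeline described above.
# `detect_traffic_lights` and `classify_light_state` are placeholders
# for the real detector and classifier models.

def detect_traffic_lights(image):
    """Stage 1: return bounding boxes of candidate traffic lights."""
    return image.get("boxes", [])

def classify_light_state(image, box):
    """Stage 2: classify one cropped detection as green/yellow/red."""
    return box.get("state", "unknown")

def recognise(image):
    """Run both stages; either model can be replaced independently."""
    states = [classify_light_state(image, b)
              for b in detect_traffic_lights(image)]
    # Be conservative: a single red detection means "red".
    if "red" in states:
        return "red"
    return states[0] if states else "none"

print(recognise({"boxes": [{"state": "red"}, {"state": "green"}]}))  # red
```

Because `recognise` only depends on the two stage interfaces, an improved detector or classifier can be dropped in without touching the rest of the pipeline.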
The detection module uses the MobileNet Single-Shot Detector from TensorFlow's model zoo. This model proved sufficiently accurate within the given timing constraints. A full evaluation and comparison to other models can be found in this notebook.
Classification of traffic lights is performed by a KaNet deep neural network. The KaNet is unique because it allows additional independent learning through layer divergence of target variables. It is also extremely fast at inference time, a necessary requirement for real-time recognition.
Training: the KaNet model is trained like any other multi-class classifier, using 1-of-K encoding and cross-entropy loss. (For more information on training the KaNet model, see the notebook.)
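The 1-of-K encoding and cross-entropy loss mentioned above amount to the following (a minimal self-contained sketch, not the training code itself):

```python
import math

def one_hot(label, num_classes):
    """1-of-K encoding: class index -> one-hot vector."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy between a one-hot target and predicted probabilities."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(target, predicted))

# Three classes: 0 = green, 1 = yellow, 2 = red
target = one_hot(2, 3)        # [0.0, 0.0, 1.0]
predicted = [0.1, 0.2, 0.7]   # softmax output from the classifier
loss = cross_entropy(target, predicted)
print(round(loss, 4))  # 0.3567, i.e. -ln(0.7)
```

With a one-hot target, the loss reduces to the negative log-probability assigned to the true class, so the optimiser simply pushes that probability toward 1.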
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
- 2 CPUs
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS:
- ROS Kinetic if you have Ubuntu 16.04.
- ROS Indigo if you have Ubuntu 14.04.
- Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
- Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
- Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
- Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car
- Unzip the file
unzip traffic_light_bag_files.zip
- Play the bag file
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
- Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch