This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
Name | Udacity Account Email |
---|---|
Hesham Abdelfattah | hesham.abdelfattah81@gmail.com |
Naveen Jagadish | naveen.lsu@gmail.com |
Frederic Liu | liufubo4213@126.com |
This node publishes a fixed number of waypoints (200 in this implementation) ahead of the vehicle, each with a target velocity. If an upcoming stop light is detected, the waypoint velocities are adjusted so that the car decelerates or accelerates depending on the light state.
The implementation mainly follows the classroom's walkthrough solution: it subscribes to /current_pose, /base_waypoints and /traffic_waypoints, uses a KDTree to find the closest waypoint ahead of the vehicle, decelerates when approaching a red light, and finally publishes /final_waypoints.
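The two core pieces of that node can be sketched as follows. The real node queries a scipy KDTree for speed; this dependency-free sketch uses a linear scan plus the same "is the waypoint ahead of the car" dot-product check, and the `LOOKAHEAD_WPS`/`MAX_DECEL` constants are illustrative values modeled on the classroom code, not the tuned project parameters:

```python
import math

LOOKAHEAD_WPS = 200   # number of waypoints published ahead of the car
MAX_DECEL = 0.5       # assumed comfortable deceleration limit (m/s^2)

def closest_waypoint_ahead(car_xy, waypoints):
    """Index of the nearest waypoint that lies ahead of the vehicle.

    waypoints is a list of (x, y) tuples; the project uses a KDTree for
    the nearest-neighbor query, a linear scan gives the same result here.
    """
    cx, cy = car_xy
    closest = min(range(len(waypoints)),
                  key=lambda i: (waypoints[i][0] - cx) ** 2 +
                                (waypoints[i][1] - cy) ** 2)
    # Hyperplane check: if the closest waypoint is behind the car,
    # take the next one instead.
    prev = waypoints[closest - 1]
    cl = waypoints[closest]
    cl_vect = (cl[0] - prev[0], cl[1] - prev[1])
    pos_vect = (cx - cl[0], cy - cl[1])
    if cl_vect[0] * pos_vect[0] + cl_vect[1] * pos_vect[1] > 0:
        closest = (closest + 1) % len(waypoints)
    return closest

def decelerate(velocities, distances_to_stop):
    """Cap each waypoint's target velocity so the car stops at the line.

    v = sqrt(2 * MAX_DECEL * d) is the deceleration profile from the
    classroom solution; velocities below 1 m/s are clamped to zero.
    """
    out = []
    for v, d in zip(velocities, distances_to_stop):
        target = math.sqrt(2.0 * MAX_DECEL * max(d, 0.0))
        out.append(min(v, target) if target >= 1.0 else 0.0)
    return out
```

For example, a car at (1.1, 0.2) between waypoints (1, 0) and (2, 0) gets waypoint index 2 back, and `decelerate` leaves distant waypoints at cruise speed while forcing the ones at the stop line to zero.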
This node represents the drive-by-wire (DBW) controller. It receives /twist_cmd and the current velocity, calculates throttle, brake and steering commands, and publishes them to the vehicle.
This controller is responsible for acceleration and steering. Throttle is calculated with a PID controller on the velocity error. Steering is calculated with a YawController, which computes the steering angle needed to achieve the target angular velocity at the current linear velocity.
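A minimal sketch of those two pieces, modeled on the classroom `pid.py` and `yaw_controller.py` (the gains and limits below are illustrative assumptions, not the project's tuned values):

```python
import math

class PID(object):
    """PID controller with output clamping, used here for throttle."""

    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0
        self.last_error = 0.0

    def step(self, error, sample_time):
        self.int_val += error * sample_time
        derivative = (error - self.last_error) / sample_time
        self.last_error = error
        val = (self.kp * error + self.ki * self.int_val +
               self.kd * derivative)
        return max(self.min, min(self.max, val))

def yaw_steering(wheel_base, steer_ratio, linear_vel, angular_vel):
    """Simplified YawController: steering angle for a target yaw rate.

    The turning radius is v / omega, and the wheel angle follows from
    the bicycle model: atan(wheel_base / radius).
    """
    if abs(angular_vel) < 1e-6 or abs(linear_vel) < 1e-6:
        return 0.0
    radius = linear_vel / angular_vel
    return steer_ratio * math.atan(wheel_base / radius)
```

In the real node the PID is reset whenever DBW is disengaged, so the integral term does not wind up while a safety driver has manual control.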
This node is responsible for detecting upcoming traffic lights and classifying their state (red, yellow, green).
For the classification model, we referred to the work of previous teams on this project. We chose Tensorflow's Object Detection API to train the traffic-light classification model because it is easy to train and use.
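At inference time, one pass of the trained detector returns arrays of class ids and confidence scores, which the classifier node reduces to a single light state. A sketch of that reduction, assuming hypothetical class ids and a 0.5 confidence threshold (both depend on the label map used for training):

```python
# Class ids follow the label map used during training; these particular
# ids and the 0.5 threshold are assumptions for illustration.
CLASS_TO_STATE = {1: 'GREEN', 2: 'RED', 3: 'YELLOW'}
SCORE_THRESHOLD = 0.5

def light_state(classes, scores):
    """Pick the state of the highest-scoring detection above threshold.

    `classes` and `scores` are the flattened per-detection arrays from
    one Object Detection API inference pass; returns 'UNKNOWN' when no
    detection is confident enough.
    """
    best_state, best_score = 'UNKNOWN', SCORE_THRESHOLD
    for cls, score in zip(classes, scores):
        if score > best_score and cls in CLASS_TO_STATE:
            best_state, best_score = CLASS_TO_STATE[cls], score
    return best_state
```

Returning 'UNKNOWN' on low confidence lets the waypoint updater keep the previous behavior rather than braking on a spurious detection.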
The training image dataset can be downloaded here.
The workflow for training the classification model with the Object Detection API is as follows:
```bash
sudo apt-get install protobuf-compiler
sudo pip install pillow
sudo pip install lxml
sudo pip install jupyter
sudo pip install matplotlib
```
```bash
# From root directory
protoc object_detection/protos/*.proto --python_out=.
```
```bash
# From root directory
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
```
```bash
# From root directory
curl http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz | tar -xv -C model/ --strip 1
```
```bash
python object_detection/dataset_tools/create_pascal_tf_record.py --data_dir=data/sim_training_data/sim_data_capture --output_path=sim_data.record
python object_detection/dataset_tools/create_pascal_tf_record.py --data_dir=data/real_training_data/real_data_capture --output_path=real_data.record
```
```bash
python object_detection/train.py --pipeline_config_path=config/ssd_mobilenet_v1_coco_sim.config --train_dir=data/sim_training_data/sim_data_capture
python object_detection/train.py --pipeline_config_path=config/ssd_mobilenet_v1_coco_real.config --train_dir=data/real_training_data
```
Currently, in the simulator, the car drives smoothly along the waypoints, stops at the stop line when the traffic light is red, and recovers correctly from manual mode.