
Vehicle detection; see my video demo at https://www.youtube.com/watch?v=fuuFvu4LfqA


Vehicle Detection

Jianguo Zhang, June 24, 2017

Udacity - Self-Driving Car NanoDegree


Vehicle Detection Project

The goals / steps of this project are the following:

  • Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier
  • Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.
  • Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.
  • Implement a sliding-window technique and use your trained classifier to search for vehicles in images.
  • Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.
  • Estimate a bounding box for vehicles detected.

Rubric Points

Here I will consider the rubric points individually and describe how I addressed each point in my implementation.


Writeup / README

1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. Here is a template writeup for this project you can use as a guide and a starting point.

You're reading it! The program code can be found in vehicle_detection.ipynb or vehicle_detection.html.

Here's a link to my vehicle detection video result. I also tried to combine vehicle detection and lane detection; here's a link to my vehicle_and_lane_lines_detection video result. Currently the combined method is quite naive, and I will improve it in the future.

Histogram of Oriented Gradients (HOG)

1. Explain how (and identify where in your code) you extracted HOG features from the training images.

The code for this step is contained in code cells 2-6.

I started by reading in all the vehicle and non-vehicle images. Here is an example of the vehicle and non-vehicle classes:

[Image: example vehicle and non-vehicle training images]

I then explored different color spaces and different skimage.hog() parameters (orientations, pixels_per_cell, and cells_per_block). I grabbed random images from each of the two classes and displayed them to get a feel for what the skimage.hog() output looks like.

Here is an example using the YCrCb color space and HOG parameters of orientations=8, pixels_per_cell=(8, 8) and cells_per_block=(2, 2):

[Image: HOG visualization for vehicle and non-vehicle examples in the YCrCb color space]
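As a minimal sketch of this step (the helper name and the test-image path are illustrative, not necessarily the exact notebook code), the per-channel extraction looks like this:

```python
import cv2
from skimage.feature import hog

def get_hog_features(channel, orient=9, pix_per_cell=8, cell_per_block=2):
    # Compute a flattened HOG feature vector for one image channel
    return hog(channel,
               orientations=orient,
               pixels_per_cell=(pix_per_cell, pix_per_cell),
               cells_per_block=(cell_per_block, cell_per_block),
               transform_sqrt=True,
               feature_vector=True)

# Example: HOG features from all three YCrCb channels (hog_channel = ALL)
img = cv2.imread('test_images/test1.jpg')       # path is illustrative
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # cv2 loads images as BGR
hog_features = [get_hog_features(ycrcb[:, :, ch]) for ch in range(3)]
```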

2. Explain how you settled on your final choice of HOG parameters.

The code for this part can be found in code cells 7-8.

I tried various combinations of parameters. The final parameters I settled on are as follows:

Color space: YCrCb
orient: 9
pixels per cell: 8
cells per block: 2
hog_channel: ALL
spatial_size: (16, 16)
hist_bins: 32
ystart: 400
ystop: 656
number of frames: 10
scale: 1.5
xy_overlap: (0.5, 0.5)
feature vector length: 6156 (both X_train and Scaled_X_train)

3. Describe how (and identify where in your code) you trained a classifier using your selected HOG features (and color features if you used them).

The code for this part is in code cells 9-11.

First, I normalized the combined features; then I trained a linear SVM using LinearSVC. The dataset is randomly split into training and testing sets, with test_size set to 0.2.

I also tried automatic parameter tuning using GridSearchCV with the parameter grid {'kernel': ('linear', 'rbf'), 'C': [1, 10]}. However, while it does find the best parameters, it takes a long time to train.
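A condensed sketch of the training step (assuming car_features and notcar_features are the lists of feature vectors extracted above; the random seed is illustrative):

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV

# Stack the features and build the labels (1 = vehicle, 0 = non-vehicle)
X = np.vstack((car_features, notcar_features)).astype(np.float64)
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))

# Normalize the combined features
scaler = StandardScaler().fit(X)
scaled_X = scaler.transform(X)

# Random train/test split with test_size = 0.2
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=42)

svc = LinearSVC()
svc.fit(X_train, y_train)
print('Test accuracy:', round(svc.score(X_test, y_test), 4))

# Optional: exhaustive parameter search (finds good parameters but is slow)
# clf = GridSearchCV(SVC(), {'kernel': ('linear', 'rbf'), 'C': [1, 10]})
# clf.fit(X_train, y_train)
# print(clf.best_params_)
```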

Sliding Window Search

1. Describe how (and identify where in your code) you implemented a sliding window search. How did you decide what scales to search and how much to overlap windows?

The code for this part is contained in code cells 12-19.

This code is mainly adapted from the lesson material; the window overlap is set to (0.5, 0.5):

[Image: sliding-window search result on a test image]
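A sketch of the basic sliding-window generator (adapted from the lesson-style helper; the exact signatures in the notebook may differ):

```python
def slide_window(img_shape, x_start_stop=(None, None), y_start_stop=(400, 656),
                 xy_window=(96, 96), xy_overlap=(0.5, 0.5)):
    # Generate the top-left/bottom-right corners of every search window
    x_start = 0 if x_start_stop[0] is None else x_start_stop[0]
    x_stop = img_shape[1] if x_start_stop[1] is None else x_start_stop[1]
    y_start, y_stop = y_start_stop
    x_step = int(xy_window[0] * (1.0 - xy_overlap[0]))
    y_step = int(xy_window[1] * (1.0 - xy_overlap[1]))
    windows = []
    for y in range(y_start, y_stop - xy_window[1] + 1, y_step):
        for x in range(x_start, x_stop - xy_window[0] + 1, x_step):
            windows.append(((x, y), (x + xy_window[0], y + xy_window[1])))
    return windows

# img: an RGB frame loaded elsewhere; each window is then resized to
# 64x64, featurized, and classified by the trained SVM
windows = slide_window(img.shape, xy_overlap=(0.5, 0.5))
```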

You can see that with this method there are some false positive windows, and some cars are not detected. Next I tried a more efficient method that both extracts features and makes predictions: it extracts the HOG features only once for the whole region of interest, and the resulting HOG array can then be sub-sampled to get all of the overlaying windows.

The code for this part can be found in code cells 20-22.
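The idea can be sketched as follows. For brevity, this sketch classifies on HOG features alone and assumes svc and scaler were trained accordingly; the actual notebook also appends the spatial and color-histogram features to each window's feature vector:

```python
import cv2
import numpy as np
from skimage.feature import hog

def find_cars_sketch(img, svc, scaler, ystart=400, ystop=656, scale=1.5,
                     orient=9, pix_per_cell=8, cell_per_block=2):
    roi = cv2.cvtColor(img[ystart:ystop, :, :], cv2.COLOR_RGB2YCrCb)
    if scale != 1:
        roi = cv2.resize(roi, (int(roi.shape[1] / scale),
                               int(roi.shape[0] / scale)))
    # HOG is computed only once per channel for the whole region of interest
    hogs = [hog(roi[:, :, ch], orientations=orient,
                pixels_per_cell=(pix_per_cell, pix_per_cell),
                cells_per_block=(cell_per_block, cell_per_block),
                feature_vector=False) for ch in range(3)]
    nxblocks = roi.shape[1] // pix_per_cell - cell_per_block + 1
    nyblocks = roi.shape[0] // pix_per_cell - cell_per_block + 1
    nblocks_per_window = 64 // pix_per_cell - cell_per_block + 1
    cells_per_step = 2            # step in cells instead of a pixel overlap
    boxes = []
    for yb in range((nyblocks - nblocks_per_window) // cells_per_step + 1):
        for xb in range((nxblocks - nblocks_per_window) // cells_per_step + 1):
            ypos, xpos = yb * cells_per_step, xb * cells_per_step
            # Sub-sample the precomputed HOG arrays for this window
            feats = np.hstack([h[ypos:ypos + nblocks_per_window,
                                 xpos:xpos + nblocks_per_window].ravel()
                               for h in hogs]).reshape(1, -1)
            if svc.predict(scaler.transform(feats))[0] == 1:
                xleft, ytop = xpos * pix_per_cell, ypos * pix_per_cell
                win = int(64 * scale)
                boxes.append(((int(xleft * scale), ystart + int(ytop * scale)),
                              (int(xleft * scale) + win,
                               ystart + int(ytop * scale) + win)))
    return boxes
```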

Here is one example:

[Image: HOG sub-sampling search result on a test image]

2. False positives and heat mapping

The code for this part is contained in code cells 24-29.

You can see that this method still produces some false positive windows and misses some cars, as in the following example:

[Image: detection result containing false positive windows]

We convert the "hot windows" into a heat map: the "hot" parts of the map are where the cars are, and a threshold is then applied to keep only the true positive detections.

Before applying the threshold:

[Image: heat map before thresholding]

After applying the threshold, areas affected by false positives are rejected:

[Image: heat map after thresholding]
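A minimal sketch of the heat-mapping helpers (img and hot_windows come from the search step above; the threshold value is illustrative):

```python
import numpy as np

def add_heat(heatmap, bbox_list):
    # Add +1 for every pixel inside each positive-detection window
    for ((x1, y1), (x2, y2)) in bbox_list:
        heatmap[y1:y2, x1:x2] += 1
    return heatmap

def apply_threshold(heatmap, threshold):
    # Zero out pixels at or below the threshold to reject false positives
    heatmap[heatmap <= threshold] = 0
    return heatmap

heat = np.zeros(img.shape[:2], dtype=np.float32)
heat = add_heat(heat, hot_windows)   # hot_windows from the window search
heat = apply_threshold(heat, 2)      # threshold value is illustrative
```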

3. Show some examples of test images to demonstrate how your pipeline is working. What did you do to optimize the performance of your classifier?

Ultimately I searched using YCrCb 3-channel HOG features plus spatially binned color and histograms of color in the feature vector, which provided a nice result. Here are some example images:

[Image: final pipeline results on the test images]

Video Implementation

1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (somewhat wobbly or unstable bounding boxes are ok as long as you are identifying the vehicles most of the time with minimal false positives.)

Here's a link to my vehicle detection video result.

I also tried to combine vehicle detection and lane detection; here's a link to my vehicle_and_lane_lines_detection video result. I simply run the vehicle detection final_process_image function (code cell 66) on the output image of the lane line detection. I think this combining method is too naive, though, since the lane lines drawn on the image may affect the vehicle detection result.

Here is one combined example:

[Image: combined vehicle and lane line detection example]

2. Describe how (and identify where in your code) you implemented some kind of filter for false positives and some method for combining overlapping bounding boxes.

The code for this part can be found in code cell 30 through the last cell.

I recorded the positions of positive detections in each frame of the video. From the positive detections I created a heatmap and then thresholded that map to identify vehicle positions. I then used scipy.ndimage.measurements.label() to identify individual blobs in the heatmap. I then assumed each blob corresponded to a vehicle. I constructed bounding boxes to cover the area of each blob detected.
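A sketch of that step, using the labeled heat map to draw one bounding box per blob (heatmap and img are assumed from the thresholding step above):

```python
import cv2
import numpy as np
from scipy.ndimage.measurements import label

def draw_labeled_bboxes(img, labels):
    # labels[0] is the labeled array, labels[1] the number of blobs found
    for car_number in range(1, labels[1] + 1):
        nonzeroy, nonzerox = (labels[0] == car_number).nonzero()
        bbox = ((np.min(nonzerox), np.min(nonzeroy)),
                (np.max(nonzerox), np.max(nonzeroy)))
        cv2.rectangle(img, bbox[0], bbox[1], (0, 0, 255), 6)
    return img

labels = label(heatmap)              # heatmap from the thresholding step
draw_img = draw_labeled_bboxes(np.copy(img), labels)
```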

I define a Detection class to smooth detections over several frames; here I choose 10 frames. Each time, I compute the heat map over the last 10 frames, which helps reject false positives across the ordered sequence of frames. This can be found in the process_each_frame function in code cell 34.
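A minimal sketch of that smoothing class (the notebook's class may differ in detail):

```python
from collections import deque

class Detection:
    """Keep the hot windows from the last n_frames video frames."""
    def __init__(self, n_frames=10):
        self.queue = deque(maxlen=n_frames)

    def add(self, hot_windows):
        # Store this frame's positive windows; old frames fall off the deque
        self.queue.append(hot_windows)

    def all_windows(self):
        # Flatten the per-frame window lists for heat-map accumulation
        return [box for frame in self.queue for box in frame]
```

The accumulated windows are then fed to add_heat with a correspondingly higher threshold, so a detection must persist across several frames to survive.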


Discussion

1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?

Strong false positives are still hard to reject. I also did not combine multiple window scales, which may make the pipeline less robust.

In the future, I will try more advanced real-time methods such as Faster R-CNN (2016) and the newer Mask R-CNN (2017).