In this project, you'll label the pixels of a road in images using a Fully Convolutional Network (FCN). You will extract the layers from the existing VGG-16 model and restructure them with several techniques, such as converting fully connected layers to fully convolutional (1x1) layers and adding skip connections. You will also learn how to evaluate and improve your classifier's performance using the Intersection over Union (IoU) metric and inference optimization.
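As a concrete illustration of the IoU metric mentioned above, here is a minimal NumPy sketch; the function name `pixel_iou` and the binary-mask representation are assumptions for illustration, not part of the project code:

```python
import numpy as np

def pixel_iou(pred, truth):
    """Intersection over Union for binary pixel masks.

    pred, truth: boolean arrays of the same shape, True = road pixel.
    Returns intersection / union, or 1.0 if both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 if union == 0 else intersection / union

# Tiny 2x3 example: 2 pixels agree, 4 pixels are in the union.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0],
                  [1, 1, 0]], dtype=bool)
print(pixel_iou(pred, truth))  # → 0.5
```

A perfect prediction gives IoU = 1.0, and a prediction with no overlap gives 0.0, which makes IoU a stricter measure than pixel accuracy on road scenes where most pixels are background.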
Here is the link to the original repository provided by Udacity. This repository contains all the code needed to complete the project for the Semantic Segmentation module in Udacity's Self-Driving Car Nanodegree.
Example: `um` series with training epoch = 60
- Python 3
- TensorFlow
- NumPy
- SciPy
- Kitti Road dataset
- Extract the dataset in the `data` folder. This will create the folder `data_road` with all the training and test images.
- Meet the Prerequisites/Dependencies listed above.
- Clone the repo from https://github.com/udacity/CarND-Semantic-Segmentation
- Download the Kitti Road dataset from here. Extract the dataset in the `data` folder. This will create the folder `data_road` with all the training and test images.
- Build and run your code. `main.py` will check to make sure you are using a GPU; if you don't have a GPU on your system, you can use AWS or another cloud computing platform.
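Before training, the dataset layout described above can be sanity-checked with a small helper; the function name `check_kitti_layout` and the `training`/`testing` subfolder names are assumptions based on the standard Kitti Road archive, not code from this repo:

```python
import os

def check_kitti_layout(data_dir):
    """Return True if data_dir contains the extracted Kitti Road dataset.

    Expects data_dir/data_road/{training,testing} to exist, the layout
    produced by extracting the archive into the data folder.
    """
    root = os.path.join(data_dir, 'data_road')
    return all(os.path.isdir(os.path.join(root, sub))
               for sub in ('training', 'testing'))

# Example: fail fast instead of erroring out mid-run.
if not check_kitti_layout('data'):
    print("Kitti Road dataset not found; extract it into ./data first.")
```

Checking this up front is cheaper than discovering a missing folder after the VGG weights have already been downloaded and loaded.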
- CarND-Term3-P2-Semantic-Segmentation.ipynb: Jupyter notebook for coding, visualization, and debugging.
- helper.py: Helper functions used by `main.py`.
- main.py: Main script to extract and restructure the layers, train and validate the new classifier, then label the road pixels in the test images.
- project_tests.py: Unit test functions for validating each function in `main.py`.
- README.md: Writeup for this project, including setup, running instructions, and discussion of the project rubric.
- images: Newest inference images from the `runs` folder (all images from the most recent run).
Run the following command to run the project:
python main.py
Note: If running this in a Jupyter Notebook, system messages, such as those regarding test status, may appear in the terminal rather than in the notebook.
- The link for the frozen `VGG16` model is hardcoded into `helper.py`. The model can be found here.
- The model is not vanilla `VGG16`, but a fully convolutional version, which already contains the 1x1 convolutions that replace the fully connected layers. Please see this post for more information. A summary of additional points follows.
- The original FCN-8s was trained in stages. The authors later uploaded a version that was trained all at once to their GitHub repo. The version in the GitHub repo has one important difference: the outputs of pooling layers 3 and 4 are scaled before they are fed into the 1x1 convolutions. Some students have found that the model learns much better with the scaling layers included. The model may not converge substantially faster, but it may reach a higher IoU and accuracy.
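The scaling-then-skip-connection idea above can be sketched in plain NumPy. The scale factors (0.01 for pool4, 0.0001 for pool3) follow the at-once FCN-8s repo, but the nearest-neighbor upsampling and single-channel toy arrays here are simplifications standing in for the transposed convolutions and real feature maps:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling (stand-in for a transposed conv)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Toy feature maps: (height, width), single channel for clarity.
pool3 = np.ones((8, 8))   # 1/8 resolution
pool4 = np.ones((4, 4))   # 1/16 resolution
conv7 = np.ones((2, 2))   # 1/32 resolution (after the 1x1 convs)

# Scale the pooling outputs before using them in skip connections,
# as in the at-once FCN-8s (pool4 by 0.01, pool3 by 0.0001).
pool4_scaled = pool4 * 0.01
pool3_scaled = pool3 * 0.0001

# Decoder: upsample and add the skip connections, stage by stage.
fuse4 = upsample2x(conv7) + pool4_scaled   # back to 1/16 resolution
fuse3 = upsample2x(fuse4) + pool3_scaled   # back to 1/8 resolution
print(fuse3.shape)  # (8, 8)
```

Without the scaling, the large activations from the pooling layers can dominate the small decoder outputs early in training, which is one explanation for the improved learning reported above.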
- When adding L2 regularization, setting a regularizer in the arguments of the `tf.layers` functions is not enough. The regularization loss terms must be manually added to your loss function; otherwise regularization is not actually applied.
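A minimal sketch of this point in plain NumPy; the helper name `total_loss` and the scale `1e-3` are illustrative only. In TensorFlow 1.x the equivalent step is summing the tensors in the `tf.GraphKeys.REGULARIZATION_LOSSES` collection into your loss:

```python
import numpy as np

def total_loss(cross_entropy_loss, weights, l2_scale=1e-3):
    """Data loss plus the L2 penalty that the regularizer argument
    alone does not add to the optimized loss.

    In TF 1.x this corresponds to adding the tensors from
    tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES).
    """
    reg_loss = l2_scale * sum(np.sum(w ** 2) for w in weights)
    return cross_entropy_loss + reg_loss

weights = [np.array([[1.0, -2.0]]), np.array([3.0])]
print(total_loss(0.5, weights))  # = 0.5 + 1e-3 * (1 + 4 + 9) = 0.514
```

If you train on `cross_entropy_loss` alone, the regularizer argument has no effect on the gradients, which is exactly the pitfall described above.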
All project rubric points are addressed; the answer to each rubric question is "Yes, it does."
Test Results (learning rate = 0.000001)

| Iteration | Epochs | Batch size | Loss | Time/Epoch (s) | Comments |
|---|---|---|---|---|---|
| 1 | 10 | 5 | 0.897 | 54.565 | No Jupyter Notebook restart |
| 2 | 20 | 5 | 0.558 | 57.618 | No Jupyter Notebook restart |
| 3 | 30 | 5 | 0.520 | 59.902 | No Jupyter Notebook restart |
| 4 | 40 | 5 | 0.559 | 54.364 | No Jupyter Notebook restart |
| 5 | 50 | 5 | 0.237 | 54.467 | Jupyter Notebook restarted |
| 6 | 60 | 5 | 0.211 | 54.412 | Jupyter Notebook restarted |
Test Results (learning rate = 0.00001)

| Iteration | Epochs | Batch size | Loss | Time/Epoch (s) | Comments |
|---|---|---|---|---|---|
| 1 | 10 | 5 | 0.146 | 54.313 | Jupyter Notebook restarted |
| 2 | 20 | 5 | 0.071 | 54.182 | Jupyter Notebook restarted |
| 3 | 30 | 5 | 0.042 | 54.102 | Jupyter Notebook restarted |
| 4 | 40 | 5 | 0.064 | 54.039 | Jupyter Notebook restarted |
| 5 | 50 | 5 | 0.027 | 54.017 | Jupyter Notebook restarted |
| 6 | 60 | 5 | 0.029 | 53.818 | Jupyter Notebook restarted |
Conclusion:
A learning rate of 0.00001 converges faster than 0.000001.
Compare the `um` series with training epochs = 10, 30, 60:
`um` series with training epoch = 10
`um` series with training epoch = 30
`um` series with training epoch = 60

Compare the `umm` series with training epochs = 10, 30, 60:
`umm` series with training epoch = 10
`umm` series with training epoch = 30
`umm` series with training epoch = 60

Compare the `uu` series with training epochs = 10, 30, 60:
`uu` series with training epoch = 10
`uu` series with training epoch = 30
`uu` series with training epoch = 60
Conclusion:
More training epochs yield better road-labeling results; according to the test results, epoch = 50 is good enough for this project.