Udacity_Pytorch_Scholarship_challenge

A scholarship provided by Facebook and Udacity covering concepts behind deep learning and how to build deep learning models using PyTorch.

Topics covered:

  • Neural Networks
  • PyTorch
  • CNNs and RNNs
  • Sentiment Prediction with RNNs
  • Deploying Models
  • Project - Flower Species Identifier

Phase 1 is the Challenge Course. The duration of this course is two months, and program participants will receive support from community managers throughout their learning experience, as they become part of a dynamic student community and network of scholars.

In Phase 2, the top 300 students (in terms of output and collaboration) from the first phase will earn full scholarships to Udacity’s Deep Learning Nanodegree program, where they’ll cover topics such as Convolutional and Recurrent Neural Networks, Generative Adversarial Networks, Deployment, and more. Students will use PyTorch and have access to GPUs to train models faster, as they learn from authorities like Sebastian Thrun, Ian Goodfellow, Jun-Yan Zhu, and Andrew Trask.


PyTorch Scholarship Challenge NanoDegree

Flower classification


1. Problem to solve

Build an application that tells the name of a flower from an image.

Using a convolutional neural network with transfer learning, I trained an image classifier that identifies 102 different flower species with 95% test accuracy. The classifier can be used to identify flower species from new images, e.g., in a phone app that tells you the name of the flower your camera is looking at.

2. Available data

The 102 Category Flower Dataset was provided by the Nanodegree program. It contains labeled images of 102 different flower species; the images vary in size.

3. Approach

Link to notebook

  1. Data loading and preprocessing (see the first sketch after this list)

    • Load the image data
    • Training set: apply random transformations such as rotation, scaling, and horizontal flipping so the model generalizes and performs better
    • All sets: resize and crop to the image size required by the pre-trained model
    • All sets: normalize the image colors (RGB) with the mean and standard deviation expected by the pre-trained model
    • Training set: shuffle the data at each epoch
  2. Build and train the model (see the second sketch after this list)

    • Load a pre-trained network, densenet121 (reference), and freeze its parameters (transfer learning)
    • Define a new, untrained classifier: a hidden layer (ReLU activation) and an output layer (LogSoftmax activation), with dropout to reduce overfitting
    • Set the criterion (NLLLoss, negative log likelihood loss) and the optimizer (Adam, adaptive moment estimation, reference)
    • Train the classifier layers with forward passes and backpropagation on a GPU
    • Track the loss and accuracy on the validation set to determine the best hyperparameters
  3. Use the trained classifier to predict image content (see the third sketch after this list)

    • Test the trained model on the test set (95% accuracy)
    • Save the trained model as a checkpoint
    • Write a function that returns the top-5 most probable flower names for a given image path
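
The sketches below illustrate the three steps above. First, a minimal data loading and preprocessing sketch for step 1. It assumes the dataset is organized in ImageFolder-style directories named flowers/train, flowers/valid, and flowers/test (the directory names are assumptions), and uses the ImageNet normalization statistics expected by the pre-trained model.

```python
import torch
from torchvision import datasets, transforms

# ImageNet statistics expected by the pre-trained model
mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

# Training set: random rotation, scaling (via RandomResizedCrop), and horizontal flips
train_transforms = transforms.Compose([
    transforms.RandomRotation(30),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

# Validation / test sets: deterministic resize and center crop only
eval_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

# Directory names are assumptions about the dataset layout
train_data = datasets.ImageFolder('flowers/train', transform=train_transforms)
valid_data = datasets.ImageFolder('flowers/valid', transform=eval_transforms)
test_data = datasets.ImageFolder('flowers/test', transform=eval_transforms)

# shuffle=True reshuffles the training data at every epoch
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_data, batch_size=64)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=64)
```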
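
Next, a sketch of step 2: a new classifier head trained on top of a frozen densenet121. The hidden-layer size, dropout probability, learning rate, and epoch count are illustrative choices rather than the exact values from the notebook; train_loader and valid_loader come from the previous sketch.

```python
import torch
from torch import nn, optim
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the pre-trained DenseNet-121 and freeze its feature-extractor parameters
model = models.densenet121(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# New classifier head: one hidden layer (ReLU) + output layer (LogSoftmax), with dropout
model.classifier = nn.Sequential(
    nn.Linear(1024, 512),   # DenseNet-121 outputs 1024 features
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(512, 102),    # 102 flower classes
    nn.LogSoftmax(dim=1),
)
model.to(device)

# NLLLoss pairs with LogSoftmax; only the classifier's parameters are optimized
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)

for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Track validation accuracy to compare hyperparameter choices
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in valid_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch + 1}: validation accuracy {correct / total:.3f}")
```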
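
Finally, a sketch of step 3: saving a checkpoint and predicting the top-5 flower names from an image path. The cat_to_name mapping (category label to flower name) is assumed to be provided with the dataset; model, device, eval_transforms, and train_data come from the earlier sketches.

```python
import torch
from PIL import Image

# Attach the class-to-index mapping and save the trained model as a checkpoint
model.class_to_idx = train_data.class_to_idx
checkpoint = {'state_dict': model.state_dict(),
              'class_to_idx': model.class_to_idx}
torch.save(checkpoint, 'checkpoint.pth')

def predict(image_path, model, cat_to_name, topk=5):
    """Return the top-k most probable flower names for the image at image_path."""
    image = Image.open(image_path).convert('RGB')
    tensor = eval_transforms(image).unsqueeze(0).to(device)  # add a batch dimension

    model.eval()
    with torch.no_grad():
        log_probs = model(tensor)
    probs, indices = torch.exp(log_probs).topk(topk, dim=1)

    # Map predicted indices back to class labels, then to flower names
    idx_to_class = {v: k for k, v in model.class_to_idx.items()}
    classes = [idx_to_class[i] for i in indices.squeeze().tolist()]
    names = [cat_to_name[c] for c in classes]
    return names, probs.squeeze().tolist()
```

A call like predict('path/to/image.jpg', model, cat_to_name) then returns the five most likely flower names along with their probabilities.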