Neural Networks - Simple Backpropagation Implementation from Scratch

Implements backpropagation from scratch to train a neural network on the MNIST dataset; the network was tested on MNIST and on an external dataset.

  • Sigmoid is the activation function for the hidden layer; softmax is used for the output layer (see the forward-pass sketch after this list).
  • Hyperparameters come pre-tuned and can be edited.
  • Mini-batch gradient descent is used to train the weights (see the training-loop sketch below).
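
Below is a minimal NumPy sketch of the forward pass, assuming a single hidden layer, 784 input pixels, and 10 output classes; the function and parameter names (`forward`, `W1`, `b1`, `W2`, `b2`) are illustrative, not taken from this repository.

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid via tanh: 0.5*(1 + tanh(z/2)) == 1/(1 + exp(-z)),
    # but numerically stable for large |z|.
    return 0.5 * (1.0 + np.tanh(0.5 * z))

def softmax(z):
    # Shift by the row-wise max so exp() cannot overflow.
    shifted = z - z.max(axis=1, keepdims=True)
    exp_z = np.exp(shifted)
    return exp_z / exp_z.sum(axis=1, keepdims=True)

def forward(X, W1, b1, W2, b2):
    # X: (batch, 784) flattened MNIST pixels.
    h = sigmoid(X @ W1 + b1)   # hidden layer, sigmoid activation
    p = softmax(h @ W2 + b2)   # output layer, probabilities over 10 digits
    return h, p
```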
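
And a sketch of the mini-batch gradient descent training loop with the backward pass, reusing `forward` from above. The hyperparameter values (`HIDDEN`, `LR`, `BATCH`, `EPOCHS`) are placeholders for whatever this repository actually exposes as its tunable settings.

```python
rng = np.random.default_rng(0)

# Placeholder hyperparameters -- edit to taste, as the repo's own are editable.
HIDDEN, LR, BATCH, EPOCHS = 128, 0.1, 64, 10

def init_params(n_in=784, n_hidden=HIDDEN, n_out=10):
    # Small random weights scaled by fan-in; zero biases.
    W1 = rng.normal(0.0, np.sqrt(1.0 / n_in), (n_in, n_hidden))
    W2 = rng.normal(0.0, np.sqrt(1.0 / n_hidden), (n_hidden, n_out))
    return W1, np.zeros(n_hidden), W2, np.zeros(n_out)

def train(X, Y, W1, b1, W2, b2):
    # X: (N, 784) inputs in [0, 1]; Y: (N, 10) one-hot labels.
    for epoch in range(EPOCHS):
        order = rng.permutation(len(X))          # reshuffle each epoch
        for start in range(0, len(X), BATCH):
            idx = order[start:start + BATCH]
            xb, yb = X[idx], Y[idx]
            h, p = forward(xb, W1, b1, W2, b2)
            # Backward pass. With softmax + cross-entropy the output
            # delta collapses to (p - y), averaged over the mini-batch.
            d_out = (p - yb) / len(xb)
            dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
            d_hid = (d_out @ W2.T) * h * (1.0 - h)   # sigmoid derivative
            dW1, db1 = xb.T @ d_hid, d_hid.sum(axis=0)
            # Mini-batch gradient descent step.
            W2 -= LR * dW2; b2 -= LR * db2
            W1 -= LR * dW1; b1 -= LR * db1
    return W1, b1, W2, b2
```

The `(p - y)` output delta is what makes the softmax + cross-entropy pairing attractive: the softmax Jacobian and the loss gradient cancel into a single subtraction, so no explicit softmax derivative is needed.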