To run: python perceptrons.py
Note: I very foolishly did not use numpy arrays here.
The following algorithms are implemented:
- (A) Single-sample perceptron
- (B) Single-sample perceptron with margin
- (C) Relaxation algorithm with margin
- (D) Widrow-Hoff or Least Mean Squared (LMS) rule
A graph is plotted showing the set of linearly separable points and the separation boundary generated by each algorithm. The weight vectors learned and the time taken by each algorithm are also output. Factors such as the initial weight vector, learning rate, theta, and margin can be tweaked to compare the results of the various algorithms.
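As a reference for the first two variants, the single-sample update can be sketched in plain Python (matching the no-numpy note above); with margin=0.0 it reduces to the plain perceptron rule. The function name, defaults, and the bias-augmentation convention here are illustrative assumptions, not necessarily what perceptrons.py does:

```python
def perceptron_margin(samples, labels, lr=1.0, margin=0.0, max_epochs=1000):
    """Single-sample (fixed-increment) perceptron; margin=0.0 gives the plain rule."""
    aug = [x + [1.0] for x in samples]       # append bias component to each sample
    w = [0.0] * len(aug[0])                  # initial weight vector (tweakable)
    for _ in range(max_epochs):
        updated = False
        for x, y in zip(aug, labels):
            # y * w.x <= margin means x is misclassified or inside the margin
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= margin:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:                      # a full pass with no updates: converged
            break
    return w

# Toy linearly separable data: labels are +1 (class omega1) / -1 (class omega2)
X = [[2.0, 1.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -0.5]]
Y = [1, 1, -1, -1]
w = perceptron_margin(X, Y, margin=0.5)
```

On separable data the loop terminates as soon as every sample clears the margin, which is why run-time depends on the initial weight vector and the margin.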
Observations on the following have been made:
- In each case, plot the data points on a graph (e.g. red: class omega1, blue: class omega2) and show the weight vector learned by each of the above algorithms in the same graph, labelled clearly to distinguish the different solutions.
- Run each of the above algorithms with various initial values of the weight vector, and comment on how convergence (run-) time depends on the initialization.
- Similarly, explore the effect of different margins on the final solution and on the convergence (run-) time for algorithms (B) and (C).
- In the case of LMS, extra points are added to the data to make it linearly non-separable, showing that LMS still provides an acceptable decision boundary with some classification error. For this, run: python widrowhoff.py
- As part of the submission, include the code for each of the algorithms along with a small report that explains the algorithms, implementation details, the results, and their analysis.
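For points (B)-(D) above, the two remaining update rules can be sketched together, again in plain Python; the function names, defaults, and the example data are assumptions for illustration, not the actual code in perceptrons.py or widrowhoff.py:

```python
def relaxation_margin(samples, labels, b=1.0, lr=1.5, max_epochs=1000):
    """Single-sample relaxation with margin b; lr in (0, 2) for convergence."""
    aug = [x + [1.0] for x in samples]           # append bias component
    w = [0.0] * len(aug[0])
    for _ in range(max_epochs):
        updated = False
        for x, y in zip(aug, labels):
            a = y * sum(wi * xi for wi, xi in zip(w, x))
            if a <= b:                           # inside margin: scaled correction
                step = lr * (b - a) / sum(xi * xi for xi in x)
                w = [wi + step * y * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:                          # all samples clear the margin
            break
    return w

def lms(samples, labels, lr=0.01, epochs=500):
    """Widrow-Hoff / LMS: gradient descent on the squared error (y - w.x)^2.

    It always updates, so it settles on a least-squares boundary even when
    the classes overlap, at the cost of some classification error.
    """
    aug = [x + [1.0] for x in samples]
    w = [0.0] * len(aug[0])
    for _ in range(epochs):
        for x, y in zip(aug, labels):
            err = y - sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Separable data, plus one mislabelled point that breaks separability for LMS
X = [[2.0, 1.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -0.5]]
Y = [1, 1, -1, -1]
w_relax = relaxation_margin(X, Y)
w_lms = lms(X + [[1.2, 1.2]], Y + [-1])   # LMS tolerates the bad point
```

The relaxation step size scales with how far the sample is inside the margin, so a larger b pushes the boundary away from both classes; LMS never stops updating on the outlier but still recovers a usable boundary for the rest of the data.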
To run: python neuralnet.py
This was great fun to code up and an amazing feeling when the classifier works correctly.
A simple supervised feed-forward network, trained by back-propagation with sigmoid activation functions, has been implemented for optical character recognition of any three digits between 0 and 9.
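The forward pass and back-propagation step can be sketched as below. The dimensions and the XOR sanity check are deliberately toy-sized assumptions; neuralnet.py applies the same idea with 64 inputs, 20 hidden units, and 2 outputs:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class MLP:
    """One hidden layer, sigmoid activations everywhere, per-sample backprop."""

    def __init__(self, n_in, n_hid, n_out, seed=0):
        rnd = random.Random(seed)
        # +1 column in each layer holds the bias weight
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rnd.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        xa = x + [1.0]                                   # bias input
        h = [sigmoid(sum(w * v for w, v in zip(row, xa))) for row in self.w1]
        o = [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in self.w2]
        return xa, h, o

    def train_step(self, x, t, lr=0.5):
        xa, h, o = self.forward(x)
        hb = h + [1.0]
        # Output deltas: (target - output) * sigmoid'(net), with sigmoid' = o(1-o)
        d_out = [(tk - ok) * ok * (1 - ok) for tk, ok in zip(t, o)]
        # Hidden deltas: back-propagate the output deltas through w2
        d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * self.w2[k][j]
                                         for k in range(len(d_out)))
                 for j in range(len(h))]
        for k, dk in enumerate(d_out):
            self.w2[k] = [w + lr * dk * v for w, v in zip(self.w2[k], hb)]
        for j, dj in enumerate(d_hid):
            self.w1[j] = [w + lr * dj * v for w, v in zip(self.w1[j], xa)]

# Sanity check on XOR: the squared error should fall during training.
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
net = MLP(2, 4, 1)

def mse(net):
    return sum((t[0] - net.forward(x)[2][0]) ** 2 for x, t in data) / len(data)

before = mse(net)
for _ in range(2000):
    for x, t in data:
        net.train_step(x, t)
after = mse(net)
```

The hidden deltas must be computed before w2 is overwritten, since they depend on the old output-layer weights; that ordering is the classic backprop pitfall.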
- Data: any three digits between 0 and 9 from the optdigits data set from the UCI Machine Learning Repository (training and cross-validation files have been included).
- Preprocessing: images down-sampled to 8x8.
- Classifier: 20 units in the hidden layer and 2 units in the output layer have been used here.
- Result: The number of correctly classified samples from the cross validation file is output as a percentage.
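The reported percentage can be computed along these lines. The 2-bit output coding shown (two sigmoid units thresholded at 0.5 to encode three classes) is an assumption about the scheme, not necessarily the one neuralnet.py uses:

```python
def decode(outputs):
    """Map the two sigmoid outputs to a class index via an assumed 2-bit code:
    (0, 0) -> class 0, (0, 1) -> class 1, (1, 0) -> class 2."""
    bits = tuple(1 if o >= 0.5 else 0 for o in outputs)
    return {(0, 0): 0, (0, 1): 1, (1, 0): 2}.get(bits, -1)  # -1: invalid code (1, 1)

def accuracy_percent(outputs_per_sample, labels):
    """Percentage of cross-validation samples whose decoded class matches."""
    correct = sum(decode(o) == y for o, y in zip(outputs_per_sample, labels))
    return 100.0 * correct / len(labels)

# Example: four samples; the last yields the invalid code (1, 1), so 3 of 4 correct
acc = accuracy_percent([[0.1, 0.2], [0.2, 0.9], [0.8, 0.1], [0.9, 0.9]],
                       [0, 1, 2, 0])
```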