Quick visualization of linear decision boundaries for a scratch-implemented perceptron classifier. The model evaluates a loss function with each weight/bias update and stores the best-performing parameters for later use.


Multiclass Perceptron Boundaries

This notebook serves as a quick visualization of the decision boundaries produced by a multiclass perceptron classifier. The general outline of the "franken_ceptron" class is as follows:

1. Initialize the weights and biases.
2. Loop either for a predetermined number of epochs or indefinitely (breaking on user intervention).
3. Predict the classes of the entire dataset.
4. Loop over all rows in the dataset, computing the sum-of-squares error (SSE) and storing the parameters based on prediction performance.
5. Update the weights and biases whenever the predicted and true labels do not match.

The model keeps track of the SSE after every parameter update, so whenever the current SSE beats the previous best, all relevant parameters are stored away for later use.

The key differentiator between this and a binary perceptron is how the weights are updated. A binary classifier contains only a single weight vector and a single bias, and only those are modified upon misclassification. This multiclass implementation instead holds four sets of weights and biases, one for each label in the dataset (0, 1, 2, and 3). On misclassification, the parameters for both the true and predicted labels are updated: the misclassified data point is added to the true class's weights and subtracted from the predicted class's weights, and the corresponding biases change by +1 and -1, respectively. For example, with a true label of 0 and a wrong prediction of 1, the weights for class 0 increase by the data point's values and its bias increases by 1, while the same parameters for class 1 decrease accordingly. A minimal sketch of this training loop appears in the first code block below.

The final step is to evaluate the model over a mesh grid and display a contour map of the four classes and their associated regions, as in the second sketch below.
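The notebook's own implementation isn't reproduced here, but a minimal sketch of the loop described above, reusing the notebook's "franken_ceptron" name, might look like this. The internals (zero initialization, the default epoch count, and computing the SSE directly on the predicted labels) are assumptions rather than the notebook's exact code:

```python
import numpy as np

class franken_ceptron:
    """Multiclass perceptron: one weight vector and one bias per class."""

    def __init__(self, n_classes=4, n_features=2, epochs=50):
        self.epochs = epochs
        self.W = np.zeros((n_classes, n_features))  # one row of weights per class
        self.b = np.zeros(n_classes)                # one bias per class
        self.best_W, self.best_b = self.W.copy(), self.b.copy()
        self.best_sse = np.inf

    def predict(self, X):
        # predicted class = index of the largest linear score w . x + b
        return np.argmax(X @ self.W.T + self.b, axis=1)

    def fit(self, X, y):
        for _ in range(self.epochs):
            for x_i, y_i in zip(X, y):
                pred = int(np.argmax(self.W @ x_i + self.b))
                if pred != y_i:
                    # add the point to the true class's parameters and
                    # subtract it from the predicted class's (+1 / -1 biases)
                    self.W[y_i] += x_i
                    self.b[y_i] += 1.0
                    self.W[pred] -= x_i
                    self.b[pred] -= 1.0
                    # re-score the whole dataset after each update and keep
                    # the best-performing parameters seen so far
                    sse = np.sum((self.predict(X) - y) ** 2)
                    if sse < self.best_sse:
                        self.best_sse = sse
                        self.best_W, self.best_b = self.W.copy(), self.b.copy()
        return self
```

One property of the paired +/- update is that the sum of all four weight vectors stays constant: rewarding the true class and penalizing the wrongly predicted class by the same amount is the standard multiclass perceptron rule.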
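Similarly, a sketch of the mesh-grid visualization step, assuming matplotlib and the class above; the grid resolution, colormap, and toy four-cluster dataset are arbitrary illustrative choices:

```python
import matplotlib.pyplot as plt

def plot_boundaries(model, X, y, step=0.05):
    # cover the data range with a small margin on each side
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, step),
                         np.arange(y_min, y_max, step))
    # classify every grid point, then reshape the labels back into the grid
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.contourf(xx, yy, Z, alpha=0.3, cmap="viridis")
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap="viridis", edgecolors="k")
    plt.title("Multiclass perceptron decision regions")
    plt.show()

# toy four-cluster dataset, purely for illustration
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]])
X = np.vstack([rng.normal(c, 0.7, size=(50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)
plot_boundaries(franken_ceptron(epochs=20).fit(X, y), X, y)
```

To draw the regions for the stored best parameters rather than the final ones, predict could be pointed at best_W and best_b instead of W and b.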