Python implementation of a simple MLP (multilayer perceptron) without using external packages.
The MLP is trained on a made-up example matrix:
training_data = [[-1, -1, -1, -1, -1],
[-1, -1, -1, -1, 1],
[-1, -1, -1, 1, -1],
[-1, -1, -1, 1, 1],
[-1, -1, 1, -1, -1],
[-1, -1, 1, -1, 1],
[-1, -1, 1, 1, -1],
[-1, -1, 1, 1, 1],
[-1, 1, -1, -1, -1],
[-1, 1, -1, -1, 1],
[-1, 1, -1, 1, -1],
[-1, 1, -1, 1, 1],
[-1, 1, 1, -1, -1],
[-1, 1, 1, -1, 1],
[-1, 1, 1, 1, -1],
[-1, 1, 1, 1, 1],
[ 1, -1, -1, -1, -1],
[ 1, -1, -1, -1, 1],
[ 1, -1, -1, 1, -1],
[ 1, -1, -1, 1, 1],
[ 1, -1, 1, -1, -1],
[ 1, -1, 1, -1, 1],
[ 1, -1, 1, 1, -1],
[ 1, -1, 1, 1, 1],
[ 1, 1, -1, -1, -1],
[ 1, 1, -1, -1, 1],
[ 1, 1, -1, 1, -1],
[ 1, 1, -1, 1, 1],
[ 1, 1, 1, -1, -1],
[ 1, 1, 1, -1, 1],
[ 1, 1, 1, 1, -1],
[ 1, 1, 1, 1, 1]]
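The 32 rows above are simply every combination of -1 and 1 across five positions, so rather than typing the matrix by hand it could be generated with the standard library. This is a sketch, not part of the original code:

```python
import itertools

# All 2**5 = 32 combinations of -1 and 1 over five positions,
# produced in the same order as the hand-written matrix above
# (the first position varies slowest).
training_data = [list(row) for row in itertools.product([-1, 1], repeat=5)]

print(len(training_data))   # 32
print(training_data[0])     # [-1, -1, -1, -1, -1]
print(training_data[-1])    # [1, 1, 1, 1, 1]
```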
For each list in the matrix, if there is an odd number of occurrences of 1, the label should be 1; if there is an even number of occurrences of 1, it should be -1 (the parity function).
For example, [-1, -1, -1, -1, -1]
should be labeled -1 because the number of occurrences of 1 is 0, which is an even number.
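The labeling rule can be written as a one-line helper; the name `parity_label` is chosen here for illustration and does not come from the original code:

```python
def parity_label(row):
    # 1 if the row contains an odd number of 1s, otherwise -1
    return 1 if row.count(1) % 2 == 1 else -1

print(parity_label([-1, -1, -1, -1, -1]))  # -1 (zero 1s: even)
print(parity_label([-1, -1, -1, -1, 1]))   # 1  (one 1: odd)
print(parity_label([1, 1, -1, -1, 1]))     # 1  (three 1s: odd)
```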
For each epoch, first do forward propagation with the softsign activation function, f(x) = x / (1 + |x|), then perform back-propagation: calculate the error and update the weights. Training does not stop until the error is smaller than 0.05.
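The epoch described above could look roughly like the following pure-Python sketch with one hidden layer. The network size (8 hidden units), the learning rate, and reading "the error" as the mean squared error over an epoch are all assumptions, not details from the original; the derivative 1/(1+|x|)^2 follows from f(x) = x/(1+|x|):

```python
import random

def softsign(x):
    # f(x) = x / (1 + |x|)
    return x / (1 + abs(x))

def softsign_deriv(x):
    # f'(x) = 1 / (1 + |x|)**2
    return 1 / (1 + abs(x)) ** 2

def make_mlp(n_in, n_hidden, seed=0):
    # Small random weights; each unit gets one extra bias weight.
    rng = random.Random(seed)
    w_hid = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
             for _ in range(n_hidden)]
    w_out = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    return w_hid, w_out

def forward(w_hid, w_out, x):
    xb = x + [1.0]                                    # append bias input
    h_pre = [sum(w * xi for w, xi in zip(ws, xb)) for ws in w_hid]
    h = [softsign(p) for p in h_pre] + [1.0]          # bias for output layer
    o_pre = sum(w * hi for w, hi in zip(w_out, h))
    return h_pre, h, o_pre, softsign(o_pre)

def train_epoch(w_hid, w_out, data, labels, lr=0.1):
    # One pass over the data with online (per-sample) weight updates;
    # returns the mean squared error of the epoch.
    sse = 0.0
    for x, t in zip(data, labels):
        h_pre, h, o_pre, y = forward(w_hid, w_out, x)
        err = t - y
        sse += err ** 2
        d_out = err * softsign_deriv(o_pre)
        d_hid = [d_out * w_out[j] * softsign_deriv(h_pre[j])
                 for j in range(len(h_pre))]
        for j in range(len(w_out)):
            w_out[j] += lr * d_out * h[j]
        xb = x + [1.0]
        for j in range(len(w_hid)):
            for i in range(len(xb)):
                w_hid[j][i] += lr * d_hid[j] * xb[i]
    return sse / len(data)

# Usage sketch on the parity data (may take many epochs to converge):
# w_hid, w_out = make_mlp(5, 8)
# mse = 1.0
# while mse >= 0.05:
#     mse = train_epoch(w_hid, w_out, training_data, labels)
```

Five-bit parity is a hard target for a small network, so in practice the stopping threshold may only be reached after a large number of epochs, if at all for a given random initialization.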