This repo contains small, fun projects for playing with basic deep learning concepts, including experiments such as:
- code_link Implementation of logic gates such as AND, OR, and XOR (demonstrating that XOR is not linearly separable) using a single perceptron with the unit step function as activation. (A minimal sketch appears after this list.)
- code_link Implementation of the perceptron learning rule for planar separation, using the IRIS dataset. This experiment shows that a perceptron with a step activation function stops at the first separating line it finds and does not give an optimal separating line the way a linear SVM does.
- code_link Implementation of logic gates with a sigmoid activation function, using Keras. (See the Keras sketch after this list.)
- code_link Implementation of ADALINE on the MNIST dataset for classifying handwritten 0's against other digits that resemble 0's. (The delta-rule sketch below illustrates the idea.)
- code_link Implementation of a perceptron with the unit step function for classifying handwritten 0's against other digits that resemble 0's.
- code_link Implementation of a single perceptron for the classification problem on the same dataset as above, with sum of squared errors (SSE) as the cost function. (The delta-rule sketch below covers this setup as well.)
- code_link Implementation of a 2-layer neural network with sigmoid as the activation function and SSE as the cost function. The hidden layer has 2 nodes.
- code_link Implementation and cross-validation of MNIST classification using a 2-layer neural network with sigmoid activation in both the hidden and output layers, and cross-entropy (logistic loss) as the loss function. (A backprop sketch appears after this list.)
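
As a rough illustration of the first experiments, here is a minimal sketch (not the repo's exact code) of a single perceptron with a unit step activation, trained with the perceptron learning rule on the AND gate. Running the same loop on XOR targets never converges, since XOR is not linearly separable:

```python
import numpy as np

def unit_step(z):
    # Unit step activation: fire (1) when the weighted sum is non-negative
    return np.where(z >= 0, 1, 0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)
b = 0.0
lr = 0.1  # illustrative learning rate

for epoch in range(20):
    for xi, target in zip(X, y_and):
        pred = unit_step(np.dot(w, xi) + b)
        # Perceptron learning rule: update weights only when the prediction is wrong
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print(unit_step(X @ w + b))  # expected: [0 0 0 1]
```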
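
The Keras experiment replaces the hard threshold with a sigmoid unit so gradient descent applies. A hedged sketch, where the optimizer, loss, and epoch count are illustrative guesses rather than the repo's settings:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y_or = np.array([0, 1, 1, 1], dtype="float32")  # OR truth table

# Single sigmoid unit: 2 inputs -> 1 output probability
model = Sequential([Dense(1, input_shape=(2,), activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y_or, epochs=1000, verbose=0)

# Predictions should approach the OR truth table after enough epochs
print(model.predict(X).round().ravel())
```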
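
ADALINE differs from the perceptron above in that the delta rule updates on the raw linear output (before any thresholding) and minimizes SSE; the single-perceptron-with-SSE experiment follows the same idea. A sketch under the assumption that images arrive as flattened feature rows:

```python
import numpy as np

def train_adaline(X, y, lr=1e-4, epochs=50):
    # X: (n_samples, n_features) flattened images; y: 0/1 labels
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        output = X @ w + b       # linear activation: no step function here
        error = y - output       # delta rule works on the raw error
        w += lr * (X.T @ error)  # gradient step on the SSE cost
        b += lr * error.sum()
    return w, b

def predict(X, w, b):
    # Threshold only at prediction time
    return np.where(X @ w + b >= 0.5, 1, 0)
```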
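
For the 2-layer networks, a hedged backprop sketch with sigmoid in both layers and cross-entropy loss; the 2-node hidden layer matches the SSE experiment, while the learning rate and initialization are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_2layer(X, y, hidden=2, lr=0.5, epochs=5000):
    np.random.seed(0)
    W1 = 0.5 * np.random.randn(X.shape[1], hidden)
    b1 = np.zeros(hidden)
    W2 = 0.5 * np.random.randn(hidden, 1)
    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    n = len(X)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)    # hidden-layer activations
        out = sigmoid(h @ W2 + b2)  # output probability
        # With sigmoid + cross-entropy, the output-layer delta is simply (out - y)
        d_out = out - y
        d_h = (d_out @ W2.T) * h * (1 - h)  # backpropagate through the hidden sigmoid
        W2 -= lr * (h.T @ d_out) / n
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / n
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2
```

Swapping the loss to SSE changes only the output delta, which becomes (out - y) * out * (1 - out).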
The MNIST dataset was hand-curated for these experiments and is divided into 2 classes:
- Test: class0 contains handwritten 0's (500 instances); class1 contains handwritten 2's, 3's, 6's, 9's, and 5's (500 instances)
- Train: class0 contains handwritten 0's (500 instances); class1 contains handwritten 2's, 3's, 6's, 9's, and 5's (500 instances)
- Download the dataset and extract it into the parent folder of the Jupyter notebook (i.e., place the dataset in the parent of the working directory).
The experiments above were implemented using Python 2 and Python 3. The numpy exp function may overflow and report the cost as NaN; in such cases, update your packages or try another Python version.
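
A sketch of a common workaround, assuming the NaN comes from exp overflow feeding log(0) in the cross-entropy:

```python
import numpy as np

def stable_sigmoid(z):
    # Evaluate exp only on non-positive arguments so it can never overflow
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def cross_entropy(y, p, eps=1e-12):
    # Clip probabilities so log() never sees exactly 0 or 1
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```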
- The accuracies reported in the last cell of each IPython notebook are based on datasets that follow a different distribution, so run the experiments on your own custom dataset and check the accuracy.