Emotion-Detection

Creating a CNN to classify images based on the emotions expressed by subjects in them.

This is a basic image-classification project that compares the results of 2 different classifier architectures on the FER2013 dataset, which contains 48x48 grayscale images, each labeled with one of 7 emotions.

Frameworks Used: PyTorch, NumPy, Pandas, Matplotlib.

Preprocessing:


Preprocessing involves a simple normalization of the images and a split into the Training, PrivateTest, and PublicTest subsets defined by the dataset's Usage column.
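A minimal sketch of this step, assuming the standard fer2013.csv layout (an "emotion" label column, a "pixels" column of space-separated 48x48 grayscale values, and a "Usage" column); the file name and variable names are illustrative, not the project's actual code:

```python
import numpy as np
import pandas as pd
import torch

df = pd.read_csv("fer2013.csv")

def to_tensors(frame):
    # Parse each pixel string into a 48x48 image and normalize to [0, 1].
    images = np.stack(
        [np.array(s.split(), dtype=np.float32) for s in frame["pixels"]]
    ).reshape(-1, 1, 48, 48) / 255.0
    labels = frame["emotion"].to_numpy()
    return torch.from_numpy(images), torch.from_numpy(labels)

train_x, train_y = to_tensors(df[df["Usage"] == "Training"])
# Which split serves as validation vs. test is an assumption here.
val_x, val_y = to_tensors(df[df["Usage"] == "PrivateTest"])
test_x, test_y = to_tensors(df[df["Usage"] == "PublicTest"])
```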


Using a net with 7 convolutions across 3 resolutions and a cyclic, triangular learning rate with a frequency of 10 epochs results in a validation accuracy of 62% and a test accuracy of 61%; however, the accuracy stagnates around epoch 25.
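A sketch of what such a setup could look like in PyTorch. The exact layer widths, learning-rate bounds, and batch count are assumptions for illustration; only the overall shape (7 convolutions over 3 resolutions, triangular cyclic schedule) follows the description above:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    # Resolution 1: 48x48
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),  # 48 -> 24
    # Resolution 2: 24x24
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),  # 24 -> 12
    # Resolution 3: 12x12
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 12 * 12, 7),  # 7 emotion classes
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Triangular cyclic learning rate. A frequency of 10 epochs is read here
# as one full cycle per 10 epochs: 5 epochs rising, 5 epochs falling.
batches_per_epoch = 450  # assumption; depends on the batch size chosen
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,  # assumed bounds, not the project's actual values
    max_lr=1e-2,
    step_size_up=5 * batches_per_epoch,
    mode="triangular",
)
# scheduler.step() would be called after every training batch.
```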


Decreasing the number of convolutions to 5 and the number of epochs to 25 results in significantly better accuracy: 65% on validation and 63% on testing (a sketch of this reduced variant follows).
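One way the reduced variant could look; which 2 convolutions were dropped is not specified, so removing one from each of the last two stages is purely an assumption:

```python
import torch.nn as nn

# 5-convolution variant of the sketch above, keeping all 3 resolutions.
model_small = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),  # 48 -> 24
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),  # 24 -> 12
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 12 * 12, 7),
)
```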

LIMITATIONS:

  1. The primary limitation of the model is the use of monochromatic data, which contributes to overfitting of the training data, with training accuracy hitting 99.8%.

The state-of-the-art accuracy on this dataset is 70%, hence an accuracy of 65% is deemed good.

IMPROVEMENTS:

  1. Using coloured images.
  2. Testing different variations of the learning rate schedule (see the sketch below).
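For the second item, PyTorch's CyclicLR already offers decaying variants of the triangular policy, so one possible experiment is a one-line change of the mode (reusing the optimizer and batches_per_epoch from the sketch above; the bounds remain illustrative):

```python
# "triangular2" halves the cycle amplitude after each cycle;
# "exp_range" instead decays it by gamma ** iteration.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,
    max_lr=1e-2,
    step_size_up=5 * batches_per_epoch,
    mode="triangular2",
)
```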