A Logistic Regression model to detect a person's emotion from an image of their face.

Emotion-Recognition

This is my attempt at emotion recognition from images.

My first approach was going to be simply training a Convolutional Neural Network on the Cohn-Kanade dataset. But I knew there were techniques to simplify this, so I did some research and applied a different technique, detailed below.

First, I downloaded the dataset and sorted it. All the images of a particular emotion class are saved in a single directory outside the directory in which this repository is saved. For example, the numerical code for the 'surprise' emotion is '7', so all images labelled as surprise are saved under faces-data/7/ (e.g. faces-data/7/00000.png). This made the data easier to work with later.
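
Below is a minimal sketch of how images could be sorted into that layout. It assumes the sorted data lives in a faces-data directory next to the repository; the save_sorted helper and the example source path are hypothetical placeholders, not the repository's actual code.

```python
import os
import shutil

# Sketch of the sorting step; SORTED_ROOT and the example path below are
# hypothetical placeholders, not the repository's actual code.
SORTED_ROOT = "../faces-data"  # the sorted data sits outside this repository's directory

def save_sorted(image_path, emotion_code, index):
    """Copy one labelled image into faces-data/<emotion_code>/<index>.png."""
    target_dir = os.path.join(SORTED_ROOT, str(emotion_code))
    os.makedirs(target_dir, exist_ok=True)
    shutil.copy(image_path, os.path.join(target_dir, f"{index:05d}.png"))

# Example: an image labelled 'surprise' (code 7) ends up at faces-data/7/00000.png
# save_sorted("raw/some_ck_image.png", 7, 0)
```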

How to use

  1. Clone the repository.
  2. Use the terminal to run the "emotion-recognition.py" file. It takes an argument specifying the path to an image containing a face.
  3. The run command should look like this: python emotion-recognition.py -i "path-to-image"
  4. The program will now load all the required files and detect the emotion.
  5. The output will be a string containing the person's emotion.
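
A rough, hypothetical sketch of that command-line interface is shown below. Only the -i flag comes from the original; the --image long flag and the predict_emotion stub are assumptions for illustration, not the script's actual code.

```python
import argparse

def predict_emotion(image_path):
    # Placeholder: the real script loads the trained model files and
    # returns the detected emotion label for the given image.
    return "surprise"

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Detect the emotion in a face image.")
    parser.add_argument("-i", "--image", required=True, help="path to an image containing a face")
    args = parser.parse_args()
    print(predict_emotion(args.image))  # the output is a string containing the emotion
```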

Explanation of the method used to detect emotion

The following text describes the method used to detect the emotion.

Extract Facial Features

I used Haar Cascade and dlib to detect faces and extract the 68 facial landmarks. The landmarks look like this: [landmarks image]
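
A minimal sketch of this step is below, assuming OpenCV's bundled frontal-face Haar cascade and dlib's pre-trained 68-point shape predictor (shape_predictor_68_face_landmarks.dat, downloaded separately). The extract_landmarks helper is illustrative, not the repository's exact code.

```python
import cv2
import dlib

# Assumes OpenCV's bundled Haar cascade and dlib's pre-trained 68-point
# shape predictor file (downloaded separately).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_path):
    """Return the 68 (x, y) landmark points of the first detected face, or None."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    shape = predictor(gray, dlib.rectangle(int(x), int(y), int(x + w), int(y + h)))
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```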

The reason for landmark detection is that these landmarks approximate the facial muscles, and the facial muscles make it easier to classify emotions. Hence, using these landmarks, I extracted a mask approximating the facial muscles. The mask looks like this: [face mask image]

Now, all I calculated were the lengths of the lines shown in the above face mask, and I used them to train a model.
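
A sketch of the distance-feature computation is below. The landmark pairs listed are illustrative placeholders only; the actual index pairs used by this project are defined in landmarks_calculation.py.

```python
import math

# Illustrative landmark-index pairs (eyebrow-to-eye and mouth lines);
# the real pairs live in landmarks_calculation.py.
LINE_PAIRS = [(19, 37), (24, 44), (48, 54), (51, 57)]

def line_length_features(landmarks):
    """Compute the length of each 'muscle' line from the 68 landmark points."""
    features = []
    for a, b in LINE_PAIRS:
        (xa, ya), (xb, yb) = landmarks[a], landmarks[b]
        features.append(math.hypot(xb - xa, yb - ya))
    return features
```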

Model Training

I tried many models and found that Logistic Regression gave the best results. Hence, that is the one I used.

I trained my model over 5680 images and used a validation set of over 1420 images.

I achieved an accuracy of 71.26% on the validation set.
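
Below is a hedged sketch of what the training step could look like with scikit-learn, assuming a feature CSV with the line-length features plus a single label column. The file name, column layout, and 80/20 split are assumptions for illustration, not the repository's exact code.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed CSV layout: one 'emotion' label column, all other columns are
# the line-length features; the file name is a placeholder.
data = pd.read_csv("facial_landmarks_features.csv")
X = data.drop(columns=["emotion"]).values
y = data["emotion"].values

# Hold out a validation split (the 80/20 ratio here is an assumption).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)  # multi-class handled automatically
model.fit(X_train, y_train)

print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```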

Description of files

  1. emotion-recognition.py - The file to run the trained model on a real-world image.
  2. crop_face_area.py - Detects a face in the image, crops the image to an area slightly larger than the face, and resizes it to 640x490 pixels in grayscale, similar to the Cohn-Kanade dataset images (see the sketch after this list).
  3. extract_faces_save.py - Extracts faces from the raw dataset and sorts them in the above-mentioned format.
  4. extract_facial_landmarks_data.py - Calculates the facial landmarks, computes the distances between landmarks as per the face mask, and saves them in a CSV file.
  5. landmarks_calculation.py - Two lists holding the landmark indices to use for feature extraction.
  6. model_train.py - The file to train the Logistic Regression model on the extracted feature set.
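
The following is a rough sketch of what the crop-and-resize step described for crop_face_area.py might look like. The padding factor, the crop_face_area function name, and the reuse of OpenCV's bundled Haar cascade are assumptions, not the file's actual contents.

```python
import cv2

# Assumes OpenCV's bundled frontal-face Haar cascade; the padding factor is a guess.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_area(image_path, padding=0.2):
    """Crop slightly beyond the detected face and resize to 640x490 grayscale."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    pad_w, pad_h = int(w * padding), int(h * padding)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    x1, y1 = min(x + w + pad_w, gray.shape[1]), min(y + h + pad_h, gray.shape[0])
    crop = gray[y0:y1, x0:x1]
    return cv2.resize(crop, (640, 490))  # (width, height), matching the Cohn-Kanade image size
```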