Face-Emotion-Vision

An app that can recognize students and identify whether or not they are engaged in the lecture.

Project Report:


Recognizing Students and Detecting Student Engagement with Image Processing

Problem Motivation

With COVID-19, formal education was interrupted in many countries and the importance of distance learning increased. Lessons can be delivered through various communication tools, but it is difficult to know how well they actually reach the students. In this project, we aim to monitor students in a classroom or in front of a computer with a camera, recognizing their faces, estimating their head poses, and scoring their distraction to detect student engagement based on head pose and Eye Aspect Ratio (EAR). The output of this project will be attendance and emotion records for each day, which can be used for further data analysis.
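As a concrete reference, the Eye Aspect Ratio can be computed from six eye landmarks p1..p6 (the standard Soukupová and Čech formulation). The sketch below assumes the landmarks are already available as (x, y) coordinates and is illustrative rather than the project's exact implementation.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye Aspect Ratio (EAR) for one eye.

    `eye` is a (6, 2) array of landmark coordinates ordered p1..p6:
    p1/p4 are the horizontal eye corners, p2/p3 the upper lid,
    p5/p6 the lower lid. A persistently low EAR across consecutive
    frames indicates closed or drowsy eyes.
    """
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # ||p2 - p6||
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # ||p3 - p5||
    horizontal = np.linalg.norm(eye[0] - eye[3])   # ||p1 - p4||
    return (vertical_1 + vertical_2) / (2.0 * horizontal)
```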

Non-primitive Functions

  • Histogram Equalization
  • Gamma Correction
  • Rotating Mask
  • Median Filter
  • Sobel Operator
  • Laplace Operator
  • Canny Edge
  • Morphological Opening
  • Morphological Closing
  • LBP (Local Binary Pattern)
  • Histogram of Gradients (HoG)
  • Haar Cascade
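For orientation, the OpenCV equivalents of a few of the listed operations are sketched below; the project implements its own (non-primitive) versions, and the kernel sizes and detection parameters shown here are illustrative only.

```python
import cv2

def reference_operations(gray):
    """OpenCV reference versions of a few of the listed operations.

    `gray` is a single-channel uint8 image; parameter values are examples.
    """
    median  = cv2.medianBlur(gray, 5)                       # 5x5 median filter
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # Sobel operator, x gradient
    laplace = cv2.Laplacian(gray, cv2.CV_64F)               # Laplace operator

    # Haar cascade face detection with OpenCV's bundled frontal-face model.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    return median, sobel_x, laplace, faces
```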

Block Diagram


Additional Comments

Lighting conditions pose a challenge to our system. To address this, we will use pre-processing techniques that normalize contrast, standardize brightness, and make the images more consistent, namely histogram equalization and gamma correction.
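A minimal sketch of this pre-processing step, assuming grayscale input; the gamma value is illustrative and would be tuned to the actual recording conditions.

```python
import cv2
import numpy as np

def normalize_lighting(gray, gamma=1.5):
    """Normalize contrast and brightness before face detection.

    `gray` is a single-channel uint8 frame; gamma > 1 brightens
    mid-tones, gamma < 1 darkens them.
    """
    # Histogram equalization spreads the intensity distribution,
    # normalizing contrast across frames.
    equalized = cv2.equalizeHist(gray)

    # Gamma correction via a lookup table standardizes overall brightness.
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(equalized, table)
```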

Another challenge is varying face orientations. For face detection, we will use the Histogram of Gradients (HoG) algorithm, a gradient-orientation-based method that handles different face orientations well. It divides the image into small cells to generate local descriptors for edges and contours, and the resulting histograms are normalized in blocks to reduce the effects of lighting and contrast. For face recognition, we will use Local Binary Patterns (LBP), a statistical texture measure for feature extraction that is robust to face orientation. LBP encodes the relationships between the intensities of neighboring pixels sampled in a circle around each central pixel; this circular neighborhood captures texture information that remains relatively stable even when the image is slightly rotated.
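The two descriptors can be sketched with scikit-image as below, assuming cropped grayscale face images; the parameters and the chi-square matcher are illustrative choices, not necessarily those used in the project.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def hog_descriptor(face_gray):
    """Gradient-orientation descriptor (HoG) for a grayscale face crop.

    Block normalization ("L2-Hys") reduces sensitivity to lighting and contrast.
    """
    return hog(face_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def lbp_histogram(face_gray, points=8, radius=1):
    """Texture descriptor: histogram of uniform LBP codes over the face."""
    codes = local_binary_pattern(face_gray, points, radius, method="uniform")
    # Uniform LBP with P sampling points produces P + 2 distinct code values.
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two LBP histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```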

We may also use edge detection to increase the accuracy of our face recognition model: we will apply the recognition model to the images both with and without edge-based pre-processing (morphological and texture analysis) and compare the resulting accuracies.
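One way to set up that comparison, sketched below with illustrative thresholds: derive an edge-based representation of each face crop and evaluate the recognition model on both the raw and the edge-processed inputs.

```python
import cv2
import numpy as np

def edge_representation(face_gray, low=100, high=200):
    """Edge-based version of a face crop, used as an alternative model input.

    Canny detects edges; morphological closing then opening fills small gaps
    and removes isolated noise. Thresholds and kernel size are illustrative.
    """
    edges = cv2.Canny(face_gray, low, high)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)
    return cleaned
```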

We may use data augmentation if the data is limited or if validation accuracy is not high enough. This should improve the system's robustness to different face alignments.
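A simple augmentation sketch, assuming grayscale face crops; the rotation range, flips, and brightness shifts are example perturbations that simulate different alignments and lighting.

```python
import cv2
import numpy as np

def augment(face_gray, rng=None):
    """Return one randomly perturbed copy of a face crop."""
    rng = rng or np.random.default_rng()
    h, w = face_gray.shape[:2]

    # Small random rotation to simulate head tilt / imperfect alignment.
    angle = rng.uniform(-15, 15)
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(face_gray, matrix, (w, h), borderMode=cv2.BORDER_REFLECT)

    # Random horizontal flip.
    if rng.random() < 0.5:
        out = cv2.flip(out, 1)

    # Random brightness shift.
    shift = int(rng.integers(-20, 21))
    return np.clip(out.astype(np.int16) + shift, 0, 255).astype(np.uint8)
```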

Sample Images

Engaged

  • Confused
  • Engaged
  • Frustrated

Not Engaged

  • Bored
  • Drowsy
  • Looking Away
