Fatigue Detection using Deep Learning

This repository contains a project, based on research, to detect a person's fatigue level from a photograph. The main facial cues used to detect fatigue are the under-eyes, eyes, mouth, nose, and skin.

Problem Statement

The complexities of fatigue have drawn much attention from researchers across various disciplines. Short-term fatigue may cause safety issues while driving; thus, dynamic systems have been designed to track driver fatigue. Long-term fatigue can lead to chronic syndromes and eventually affect an individual's physical and psychological health. Traditional methodologies for evaluating fatigue not only require sophisticated equipment but also consume an enormous amount of time.

Our Proposal

In this project, we attempt to develop a novel and efficient method to predict an individual's fatigue level by scrutinising human facial cues. Our goal is to predict the fatigue level from a single photo. Our work represents a promising way to assess sleep-deprivation fatigue, and our project provides a viable and efficient computational framework for large-scale user fatigue modelling.

Architecture

The architecture for this project is shown in the picture below. An image of a face is taken as input, from which the facial landmarks are detected and cropped out. These cropped facial landmarks (the eyes, under-eyes, nose, and mouth), along with the entire face image for the skin, are fed into individual models trained on those specific features. Each individual model returns a value that corresponds to a fatigue level. These values are then combined as a weighted sum (where the eyes and under-eyes are given more weight), and the result is the final value used to determine the person's fatigue level.
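
The following is a minimal sketch of that flow. The model file names, the input size, and the extract_parts() helper are assumptions for illustration (the helper stands in for the dlib-based cropping sketched in the Credits section); the repository's own code may differ.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # Hypothetical model file names; the real models are the Zenodo
    # download placed in models/image_classification (see Setup below).
    PARTS = ["left_eye", "right_eye", "left_under_eye",
             "right_under_eye", "nose", "mouth", "face"]
    models = {p: load_model("models/image_classification/" + p + ".h5")
              for p in PARTS}

    def score_part(model, crop, size=(64, 64)):
        # Resize one cropped region and return that model's fatigue score.
        x = cv2.resize(crop, size).astype("float32") / 255.0
        return float(model.predict(x[np.newaxis, ...])[0][0])

    def fatigue_scores(image):
        # extract_parts() is a placeholder for the dlib-based cropping
        # sketched in the Credits section; it maps part names to crops.
        crops = extract_parts(image)
        return {p: score_part(models[p], crops[p]) for p in PARTS}

The per-part scores are then combined with the weighted sum described under step 8 below.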

Setup Instructions

  1. Clone the entire repository to your local machine.
  2. Download the contents of the object_detection folder from Zenodo and place them in the object_detection folder.
  3. Download the models from Zenodo and place them in models/image_classification.
  4. Open the Anaconda Command Prompt and set up a new environment:

     C:\> conda create -n FatigueDetection pip python=3.6
    

    Activate the environment and upgrade pip:

    C:\> activate FatigueDetection
    (FatigueDetection) C:\> python -m pip install --upgrade pip
    

    All other requirements can be installed using requirements.txt:

     (FatigueDetection) C:\> pip install -r requirements.txt
    
  5. After all the package installations have completed, navigate to the directory where the project has been downloaded and run "config.py":
    (FatigueDetection) C:\> python config.py
    
  6. After "config.py" has been run now you can run "app.py":
    (FatigueDetection) C:\> python app.py
    

    After running the above command you should see startup output like the example below. Copy the URL right after "Running on" and paste it into your browser.
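
    A typical Flask development-server startup looks something like this (the exact address and port may differ on your machine):

     (FatigueDetection) C:\> python app.py
      * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)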

  7. After running the Python script and opening the link in your browser, you should see the following screen.

  8. This is the homepage of the project.

    Here you can upload an image by clicking the Browse button and selecting the image whose fatigue level you want to check. In this example an image has been selected and is ready for fatigue detection.

    After clicking the Predict button, this is the result that is displayed.

    Some information based on the results:


    Final score given to the image:


    Actual score given to each individual facial landmark:

    Note: The lower the score, the higher the level of fatigue.

    Current aggregation used for final score:

    (((sum of left eye and right eye scores) / 2) * 0.4) + (((sum of left under-eye and right under-eye scores) / 2) * 0.55) + (((sum of nose, face and mouth scores) / 3) * 0.05)
    

    This aggregation is based on a basic intuitive hypothesis. Please feel free to assign weights according to your own hypothesis; for example, linear regression can be used to assign a specific weight to each part of the face.
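
    Below is a minimal sketch of this aggregation, plus an optional scikit-learn variant that learns the weights from labelled data. The function names and the availability of ground-truth fatigue scores are assumptions for illustration.

        from sklearn.linear_model import LinearRegression

        def final_score(s):
            # Weighted aggregation exactly as given above; `s` maps part
            # names to the scores returned by the individual models.
            eyes = (s["left_eye"] + s["right_eye"]) / 2
            under_eyes = (s["left_under_eye"] + s["right_under_eye"]) / 2
            rest = (s["nose"] + s["face"] + s["mouth"]) / 3
            return eyes * 0.4 + under_eyes * 0.55 + rest * 0.05

        def fit_weights(part_scores, true_scores):
            # Learn per-part weights from labelled data instead of
            # hand-picking them. part_scores: (n_samples, n_parts) model
            # outputs; true_scores: (n_samples,) ground-truth fatigue scores.
            reg = LinearRegression().fit(part_scores, true_scores)
            return reg.coef_, reg.intercept_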

    Main contributors of the project:

    1. Sreyan Ghosh
    2. Sherwin Joseph
    3. Rohan Roney
    4. Samden Lepcha

    Extras:

    Credits:

    You can follow this link for an excellent tutorial on training your own custom object detection model. Our under-eye object detection is heavily based on it.

    You can follow this link from pyimagesearch for facial part extraction using dlib. Our facial part extraction is heavily based on it.
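
    For reference, below is a minimal sketch of that approach. The input file name is a placeholder, and shape_predictor_68_face_landmarks.dat is dlib's 68-point landmark model, which is a separate download.

        import cv2
        import dlib
        import numpy as np
        from imutils import face_utils

        detector = dlib.get_frontal_face_detector()
        predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

        image = cv2.imread("face.jpg")  # placeholder input image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        for rect in detector(gray, 1):
            # 68 (x, y) landmark coordinates for this face.
            shape = face_utils.shape_to_np(predictor(gray, rect))
            # Crop each named part from the bounding box of its landmarks.
            for name, (i, j) in face_utils.FACIAL_LANDMARKS_IDXS.items():
                (x, y, w, h) = cv2.boundingRect(np.array([shape[i:j]]))
                cv2.imwrite(name + ".png", image[y:y + h, x:x + w])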

    Data:

    You can download the data used for the project from here.

    P.S. - If you happen to use any of our code or data in your experiments, you can cite our work/data via the Zenodo record for this repository, where you will find instructions on how to cite it.