Non-verbal-language-and-emotions

Dataset project for face, tone, body language and emotion.

License: GNU General Public License v3.0 (GPL-3.0)

Non-verbal language project

A project to develop a set of models and scripts within a framework 
oriented to recognising human emotions, either as a whole or from specific body parts.

Acknowledgements

Thanks to the people who provided pictures and feedback for improving this project:

  • EDLRL
  • Sideloading group
  • Roman Sitelew

Description

This project consists of four well-defined objectives:

  1. To create a model to recognise basic and acquired human emotions.

  2. To create a model to recognise human emotions by tone of voice.

  3. To create a model to recognise body language.

  4. To create a model to recognise hand gestures.

The datasets will be generated with Google's 'Teachable Machine' project and exported to various existing formats. The project is open source, and collaboration from various people is expected to extend this dataset and give a programme deep non-verbal recognition capability.

Keep in mind that extensions to the dataset have to be organised and discussed directly among all participants, thus reducing cognitive bias in the dataset.
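As a minimal sketch of consuming such a dataset, the snippet below loads a Teachable Machine model exported in Keras format. The file names `keras_model.h5` and `labels.txt` follow Teachable Machine's export defaults, but the exact layout in this repository is an assumption; adjust the paths to match the actual export.

```python
def parse_labels(lines):
    """Teachable Machine writes labels.txt as one '<index> <name>' entry per line."""
    return [line.split(maxsplit=1)[1].strip() for line in lines if line.strip()]


def load_model_and_labels(model_path="keras_model.h5", labels_path="labels.txt"):
    """Load an exported Keras model together with its class labels."""
    # TensorFlow is imported lazily so parse_labels() can be used without it.
    from tensorflow.keras.models import load_model
    with open(labels_path) as f:
        labels = parse_labels(f)
    return load_model(model_path), labels
```

The returned model can then be fed preprocessed camera frames, with `labels` mapping each output index back to an emotion class.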

Installation

On macOS:

  • pyenv install 3.10.6
  • cd to the dir where you have cloned this repo
  • pyenv local 3.10.6

Next steps (on both Ubuntu and macOS):

  • virtualenv env --python=python3.10
  • source env/bin/activate
  • pip install -r requirements.txt

Execution

  • cd Face
  • python whole_body_record_data_video.py

Note: if the screen is black, try changing the camera index in this line of whole_body_record_data_video.py:

camera = cv2.VideoCapture(0)
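Rather than changing the index by trial and error, the helper below probes candidate indices and returns the first one that opens. It takes the camera constructor as a parameter (pass `cv2.VideoCapture` in practice), which is a small design choice of this sketch so the logic can be tested without a physical camera; the function name is hypothetical, not part of the repository.

```python
def find_working_camera(open_camera, max_index=4):
    """Return the first camera index in 0..max_index-1 that opens, else None.

    open_camera: a callable like cv2.VideoCapture that takes a device index.
    """
    for i in range(max_index):
        cap = open_camera(i)
        try:
            if cap.isOpened():
                return i
        finally:
            cap.release()
    return None
```

Usage: `camera = cv2.VideoCapture(find_working_camera(cv2.VideoCapture))`.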

Tested on:

  • Python 3.10.6
  • Ubuntu 22.04.2 LTS
  • HP Laptop 15s-eq1xxx
  • AMD® Ryzen 7 4700U with Radeon Graphics × 8

It also runs at least on macOS Sonoma 14.7 (M1).

Sources and references: