We aim to help parents better understand their children's emotions and mental well-being through seamless tracking of the children's emotions and FFT-based analysis of their cry signals.
Powered by OpenCV and Deep Learning.
Special thanks to:
Reference: https://github.com/petercunha/Emotion.git, Stack Overflow, SegmentFault
cd children-emotion
python3 fft.py
python3 emotions.py
Cry signals can be described by their features within two common domains: (1) the time domain and (2) the frequency domain. From either of these domains, a number of significant characteristics can be extracted (Scherer, 1982). In this section, we describe the different domain features applied at different levels of our work.
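To make the two domains concrete, here is a minimal sketch of reading a cry recording and inspecting it in the frequency domain with an FFT. It is an illustration rather than the code in `fft.py`: the file name `cry.wav` is a placeholder, and the recording is assumed to be mono.

```python
# Illustration only (not the code in fft.py): a cry recording viewed in
# the time and frequency domains.  "cry.wav" is a placeholder file name
# and the recording is assumed to be mono.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("cry.wav")            # time-domain samples
samples = samples.astype(np.float64)

spectrum = np.abs(np.fft.rfft(samples))            # frequency-domain view
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

print("Dominant frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```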
- Time-domain features
  a. Intensity. The intensity, also called the loudness, is related to the amplitude of the signal and represents the amount of energy a sound has per unit area. For a signal section of length N, the intensity is defined as a logarithmic measure in decibels as follows:
$$I = 10 \log\left(\sum_{n=1}^{N} s^{2}(n)\, w(n)\right),$$
where $w(n)$ is a window function and $s(n)$ is the amplitude of the signal (a short numerical sketch of this formula follows below).
Intensity is an essential feature widely used in applications such as music mood detection, where intensity-based features have been reported to reach accuracy rates of up to 99%, demonstrating their effectiveness.
This approach is backed by the research paper Scherer, K. R. (1982), “The assessment of vocal expression in infants and children,” in Measuring Emotions in Infants and Children.
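Below is a minimal numerical sketch of the intensity formula above; the Hann window and the frame length N = 1024 are illustrative choices, not values taken from this project.

```python
# Intensity (loudness) of a single frame of length N, following the
# decibel form I = 10 * log10( sum_{n=1..N} s^2(n) * w(n) ).
# The Hann window and N = 1024 are illustrative assumptions.
import numpy as np

def frame_intensity(s, w):
    """s: signal frame of length N; w: window of the same length."""
    return 10.0 * np.log10(np.sum((s ** 2) * w))

N = 1024
frame = np.random.randn(N)      # stand-in for a real cry frame
window = np.hanning(N)
print("Intensity: %.2f dB" % frame_intensity(frame, window))
```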
Install these dependencies with `pip3 install <module name>`, or all at once with the single command shown after the list:
- tensorflow
- numpy
- scipy
- opencv-python
- pillow
- pandas
- matplotlib
- h5py
- keras
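They can also be installed in one command:

`pip3 install tensorflow numpy scipy opencv-python pillow pandas matplotlib h5py keras`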
Once the dependencies are installed, you can run the project.
`python3 emotions.py`
- Download the fer2013.tar.gz file from here
- Move the downloaded file to the datasets directory inside this repository.
- Untar the file:
tar -xzf fer2013.tar.gz
- Download train_emotion_classifier.py from oarriaga's repo here
- Run the train_emotion_classifier.py file:
python3 train_emotion_classifier.py
The model used is from this research paper written by Octavio Arriaga, Paul G. Plöger, and Matias Valdenegro.
- Computer vision powered by OpenCV.
- Neural network scaffolding powered by Keras with TensorFlow.
- Convolutional Neural Network (CNN) deep learning architecture is from this research paper (see the illustrative sketch after this list).
- Pretrained Keras model and much of the OpenCV code provided by GitHub user oarriaga.
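For illustration only, here is a minimal sketch of how these pieces could fit together: OpenCV detects a face, and a pretrained Keras model classifies the emotion. The model file name `emotion_model.hdf5`, the 64x64 grayscale input size, and the label order are assumptions borrowed from oarriaga's repository and are not verified against this project's code.

```python
# Illustrative sketch only: OpenCV face detection + Keras emotion
# classification.  The model path, the 64x64 grayscale input size and
# the label order are assumptions (based on oarriaga's repo), not
# values verified against this project.
import cv2
import numpy as np
from keras.models import load_model

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_model.hdf5")   # placeholder model file name

frame = cv2.imread("child.jpg")            # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype("float32") / 255.0
    probs = model.predict(face.reshape(1, 64, 64, 1))[0]
    print("Detected emotion:", LABELS[int(np.argmax(probs))])
```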