This project detects human emotions in images and videos from facial expressions, using a deep learning model implemented in PyTorch.
EmotionNet is a convolutional neural network that classifies seven different emotions from facial images. It is built with PyTorch and trained on the FER-2013 dataset.
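The repository does not spell out the network's layers here, so the following is only an illustrative sketch of what a small FER-2013 classifier could look like (48x48 grayscale input, seven output classes); the actual `EmotionNet` architecture may differ.

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Illustrative CNN for 48x48 grayscale FER-2013 faces, 7 emotion classes.
    The layer sizes here are assumptions, not the repo's actual architecture."""

    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionNet()
logits = model(torch.randn(1, 1, 48, 48))  # one dummy grayscale face crop
print(logits.shape)  # torch.Size([1, 7])
```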
Transforms include resizing, normalization, random flips, and rotations to make the model robust to various facial orientations and lighting conditions.
train.py trains and validates the model over multiple epochs, reporting training and validation loss and accuracy per epoch, and plots confusion matrices to visualize model performance.
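The core of such a script is a loop that runs one pass per epoch over the training and validation loaders, accumulating loss and accuracy. A minimal sketch (the helper name `run_epoch` and the toy model below are illustrative, not from train.py):

```python
import torch
import torch.nn as nn

def run_epoch(model, loader, criterion, optimizer=None, device="cpu"):
    """One pass over loader; trains when an optimizer is given, else evaluates.
    Returns (mean loss, accuracy) over all samples seen."""
    training = optimizer is not None
    model.train(training)
    total_loss, correct, seen = 0.0, 0, 0
    with torch.set_grad_enabled(training):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss = criterion(logits, labels)
            if training:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            total_loss += loss.item() * labels.size(0)
            correct += (logits.argmax(dim=1) == labels).sum().item()
            seen += labels.size(0)
    return total_loss / seen, correct / seen

# Demo on a tiny stand-in model and a fake one-batch "loader".
model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 7))
loader = [(torch.randn(4, 1, 48, 48), torch.randint(0, 7, (4,)))]
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
train_loss, train_acc = run_epoch(model, loader, nn.CrossEntropyLoss(), optimizer)
val_loss, val_acc = run_epoch(model, loader, nn.CrossEntropyLoss())  # no optimizer = eval
```

For the confusion matrices, the per-batch predictions and labels would be collected during the validation pass and handed to scikit-learn's `confusion_matrix` for plotting.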
detect.py utilizes MTCNN for face detection and the trained EmotionNet model for emotion classification. It supports processing single images, video files, and live webcam feeds.
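After MTCNN crops the faces, each crop is passed through the trained network and the highest-scoring class is mapped to an emotion name. That classification step can be sketched as below; the MTCNN detection itself is omitted, and the label ordering follows FER-2013's common convention, which is an assumption about this repo.

```python
import torch

# FER-2013's conventional class ordering -- assumed, not confirmed by this repo.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def classify_faces(model, face_crops):
    """Classify a batch of preprocessed 1x48x48 face crops.
    Returns one (emotion label, confidence) pair per crop."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(face_crops), dim=1)
        conf, idx = probs.max(dim=1)
    return [(EMOTIONS[i], c.item()) for i, c in zip(idx.tolist(), conf)]

# Demo with a stand-in model; in detect.py the crops would come from MTCNN.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(48 * 48, 7))
results = classify_faces(model, torch.randn(2, 1, 48, 48))
```

In the video and webcam modes, this runs per frame, and the label and confidence are drawn next to each detected face box.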
To run this project, you need Python 3.x and the following libraries:
- PyTorch
- torchvision
- OpenCV
- facenet_pytorch
- numpy
- matplotlib
- seaborn
- scikit-learn
- Pillow (PIL)
- tkinter (ships with most Python installations; not installed via pip)
You can install the required libraries using pip:
pip install -r requirements.txt
Run the following command to launch the app for detecting emotions in images or videos with the pre-trained model:
python app.py
Alternatively, run detection directly from the command line with the pre-trained model:
python detect.py
To train the model from scratch:
python train.py
- Emotion Detection from Images: Load images and detect emotions from faces using a trained neural network model.
- Emotion Detection from Videos: Process videos to detect and label emotions frame by frame.
- Webcam Support: Real-time emotion detection using a webcam.