# Video Emotion Recognition

This project combines facial landmark detection and emotion recognition to analyze emotions in a video. It uses computer vision techniques, TensorFlow, and Mediapipe to detect facial landmarks, and a pre-trained model to predict emotions. The resulting video is annotated with emotion labels.
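At a high level, the pipeline crops a detected face, runs the emotion model on it, and keeps the top-scoring class. A minimal numpy sketch of that last step is shown below; the five labels are the output order documented for `emotions-recognition-retail-0003`, while a typical Keras FER model has seven classes, so adjust the list to match your model (`top_emotion` is a hypothetical helper name, not part of the script):

```python
import numpy as np

# Output classes of emotions-recognition-retail-0003, in index order.
# A FER-2013-style Keras model usually has seven classes instead.
EMOTIONS = ["neutral", "happy", "sad", "surprise", "anger"]

def top_emotion(scores, labels=EMOTIONS):
    """Return (label, confidence) for the highest-scoring emotion class."""
    scores = np.asarray(scores, dtype=np.float32).ravel()
    idx = int(np.argmax(scores))
    return labels[idx], float(scores[idx])
```

The returned label is what the script draws onto each annotated frame.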
## Prerequisites

Ensure you have the following dependencies installed:

- OpenCV (`cv2`)
- Mediapipe
- NumPy
- TensorFlow (with Keras)

To install them, run:

```
pip install opencv-python mediapipe numpy tensorflow
```
## FER (Facial Emotion Recognition) Model

Download the FER model files from the following link: FER Model. Place the downloaded files in the project directory, and update the file path in the script:

```python
fer_model = load_model("path/to/FER_model.h5")  # Replace with the actual path
```
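Keras FER models trained on FER-2013 usually expect a 48x48 grayscale input scaled to [0, 1]; check your model's `input_shape` to confirm. The dependency-free sketch below shows that preprocessing (`preprocess_face` is a hypothetical helper; in the script you would more likely use `cv2.resize` on the face crop):

```python
import numpy as np

def preprocess_face(gray_roi, size=48):
    """Nearest-neighbour resize of a grayscale face crop, scaled to [0, 1],
    shaped (1, size, size, 1) as a Keras model's predict() expects."""
    h, w = gray_roi.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = gray_roi[rows][:, cols].astype(np.float32) / 255.0
    return resized.reshape(1, size, size, 1)
```

The result can be passed straight to `fer_model.predict(...)`.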
## Emotion Recognition Retail Model

Download the Emotion Recognition Retail model files from the OpenVINO Model Zoo. Place the downloaded files (`emotions-recognition-retail-0003.bin` and `emotions-recognition-retail-0003.xml`) in the project directory, and update the file paths in the script:

```python
emotion_model_bin = "path/to/emotions-recognition-retail-0003.bin"  # Replace with the actual path
emotion_model_xml = "path/to/emotions-recognition-retail-0003.xml"  # Replace with the actual path
```
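The IR pair can be loaded with OpenCV's DNN module via `cv2.dnn.readNet(emotion_model_xml, emotion_model_bin)`. This model expects a 1x3x64x64 NCHW blob, which `cv2.dnn.blobFromImage` produces from a 64x64 BGR face crop; the numpy-only sketch below shows the equivalent rearrangement (`to_nchw_blob` is a hypothetical name for illustration):

```python
import numpy as np

def to_nchw_blob(bgr_face):
    """Rearrange a 64x64x3 BGR crop (HWC) into the 1x3x64x64 NCHW blob
    that emotions-recognition-retail-0003 expects."""
    assert bgr_face.shape == (64, 64, 3), "resize the face crop to 64x64 first"
    blob = bgr_face.astype(np.float32).transpose(2, 0, 1)  # HWC -> CHW
    return blob[np.newaxis, ...]                           # add batch dimension
```

Feed the blob with `net.setInput(blob)` and read the five-class scores from `net.forward()`.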
## Installation

- Clone this repository:

  ```
  git clone <repository-url>
  cd <repository-directory>
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```
## Usage

- Download the FER (Facial Emotion Recognition) model, and replace `"path/to/FER_model.h5"` in the script with the actual path where the model is located.
- Run the script:

  ```
  python emotion_recognition.py
  ```
The script will process the input video, annotate facial landmarks, and display the video with emotion labels.
- The script assumes the input video is located at `nosubs.mp4`. Replace this with the actual path of your video.
- The output video will be saved as `nosubs-output.mp4`.
- Press `q` to exit the application.
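The q-to-quit behavior in the notes above typically looks like the sketch below: read frames, annotate, display, and break when `q` is pressed. The window name and the annotation step are placeholders, not taken from the script; `main` is defined but not invoked here.

```python
def should_quit(key_code):
    """True when the pressed key is 'q'. The low byte is masked because
    cv2.waitKey can return platform-dependent high bits."""
    return (key_code & 0xFF) == ord("q")

def main():
    import cv2  # imported here so should_quit() stays dependency-free

    cap = cv2.VideoCapture("nosubs.mp4")  # input path from the notes above
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # ... annotate `frame` with landmarks and emotion labels here ...
        cv2.imshow("Emotion Recognition", frame)  # placeholder window name
        if should_quit(cv2.waitKey(1)):
            break
    cap.release()
    cv2.destroyAllWindows()
```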
For more details on the project and its components, refer to the code documentation and associated resources.
Feel free to explore and modify the code to suit your specific requirements.
Note: Adjust the paths, video filenames, and other configurations based on your local setup.