AuditoryUnveil-DecodingEmotionsInSpeech

Uses audio analysis along with a Convolutional Neural Network model to detect emotions in speech.

Auditory_Unveil Research Poster

Overview

This project focuses on using audio analysis techniques in combination with a Convolutional Neural Network (CNN) model to detect emotions in speech. It's designed to provide a framework for understanding and recognizing emotions in spoken language, which has practical applications in various fields, including human-computer interaction, sentiment analysis, and mental health monitoring.
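The core idea is that a spectrogram turns speech into a 2-D time-frequency "image", which a CNN can scan with small learned filters. As a rough illustration (not the notebook's actual implementation), the sketch below performs one such 2-D convolution in plain NumPy; the function name, input sizes, and kernel are all illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    deep-learning libraries): slide the kernel over the input and take a
    dot product at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "spectrogram" (64 frequency bins x 128 time frames)
# and a 3x3 vertical-edge-style filter.
spec = np.random.default_rng(0).random((64, 128))
kernel = np.array([[1, 0, -1]] * 3, dtype=float)
feature_map = conv2d(spec, kernel)
print(feature_map.shape)  # (62, 126)
```

A real CNN stacks many such filters with nonlinearities and pooling, learning the kernel weights from labeled emotional speech rather than hand-picking them.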

Features

  • Audio Preprocessing: Utilize audio processing libraries to extract relevant features from speech signals.
  • Convolutional Neural Network: Implement a CNN model to learn emotional patterns from audio data.
  • Emotion Classification: Identify emotions such as happiness, sadness, anger, and more.
  • Visualization: Visualize emotions through graphs, spectrograms, or other graphical representations.
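The preprocessing step above can be sketched as follows. This is a minimal NumPy-only illustration of how a speech signal becomes a 2-D feature representation (the actual notebook likely uses an audio library such as librosa); the function name and parameters are hypothetical:

```python
import numpy as np

def stft_spectrogram(signal, frame_size=512, hop=256):
    """Hypothetical helper: frame the signal, apply a Hann window,
    and take the FFT magnitude of each frame, yielding a 2-D
    time-frequency array suitable as CNN input."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # rfft keeps only non-negative frequencies: frame_size // 2 + 1 bins
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = stft_spectrogram(tone)
print(spec.shape)  # (time frames, frequency bins)
```

In practice, features such as mel spectrograms or MFCCs are common refinements of this raw magnitude spectrogram, since they compress the frequency axis to match human hearing.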

Installation

  1. Clone the repository to your local machine:

    git clone https://github.com/sudocanttype/Auditory_Unveil.git
    cd Auditory_Unveil
  2. Install the Python dependencies:

    pip install -r requirements.txt
  3. Run the Jupyter Notebook:

    jupyter notebook ACM2023.ipynb