/speech-emotion-recognition

In this project, we use Convolutional Neural Networks (CNNs) to detect emotions in conversational speech. To train our model and analyze the acoustic features of the audio recordings, we use English-language audio exclusively, drawn from the MSP-Podcast Corpus, CREMA-D, the Toronto Emotional Speech Set (TESS), and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). We aim to identify six emotions: happiness, sadness, neutral, anger, disgust, and fear. Given the increased levels of sadness during the COVID-19 pandemic, our long-term goal is to extend this project into a tool for a voice-based assistant on a suicide prevention hotline.
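As a rough illustration of this kind of pipeline, the sketch below extracts MFCC features with librosa and trains a small CNN classifier over the six emotion classes in Keras. The file paths, feature sizes, and hyperparameters are illustrative assumptions and not necessarily the repository's actual configuration.

```python
# Minimal sketch: MFCC features + a small 2D CNN for 6-class emotion recognition.
# All paths, shapes, and hyperparameters below are assumptions for illustration.
import numpy as np
import librosa
from tensorflow.keras import layers, models

EMOTIONS = ["happiness", "sadness", "neutral", "anger", "disgust", "fear"]

def extract_mfcc(path, n_mfcc=40, duration=3.0, sr=22050):
    """Load one clip and return a fixed-size MFCC matrix (n_mfcc x frames)."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pad/trim to a fixed number of frames so every example has the same shape.
    target_frames = int(duration * sr / 512) + 1
    return librosa.util.fix_length(mfcc, size=target_frames, axis=1)

def build_model(input_shape, num_classes=len(EMOTIONS)):
    """A small CNN over the MFCC 'image' (coefficients x time x 1 channel)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),                # e.g. (40, 130, 1)
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, assuming `wav_paths` and integer `label_indices` exist:
# X = np.stack([extract_mfcc(p) for p in wav_paths])[..., np.newaxis]
# y = np.array(label_indices)               # indices into EMOTIONS
# model = build_model(X.shape[1:])
# model.fit(X, y, epochs=30, validation_split=0.2)
```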

Primary Language: Jupyter Notebook
