Echo

Machine Learning model to classify emotions extracted from speech samples.

Datasets:

  • RAVDESS:
    • This dataset contains around 1500 audio files recorded by 24 different actors (12 male and 12 female).
    • Each actor recorded short audio clips expressing 8 different emotions.
    • {1 = neutral, 2 = calm, 3 = happy, 4 = sad, 5 = angry, 6 = fearful, 7 = disgust, 8 = surprised}
    • Each audio file is named so that the 7th character of the file name identifies the emotion it represents (see the parsing sketch after this list).
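Because the label is encoded in the file name, building the training labels reduces to parsing each name. Below is a minimal Python sketch assuming the standard RAVDESS naming scheme (e.g. 03-01-06-01-02-01-12.wav, where the third dash-separated field is the two-digit emotion code); the helper emotion_from_filename is illustrative and not part of this repository.

    from pathlib import Path

    # Emotion codes as listed above.
    EMOTIONS = {
        1: "neutral", 2: "calm", 3: "happy", 4: "sad",
        5: "angry", 6: "fearful", 7: "disgust", 8: "surprised",
    }

    def emotion_from_filename(path):
        """Return the emotion label encoded in a RAVDESS file name."""
        name = Path(path).stem                   # e.g. "03-01-06-01-02-01-12"
        emotion_code = int(name.split("-")[2])   # third field holds the emotion code
        return EMOTIONS[emotion_code]

    print(emotion_from_filename("03-01-06-01-02-01-12.wav"))  # -> fearful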

Architecture

[Architecture diagram]