Speech-emotion-recognition-

Speech emotion recognition using a CNN


The main idea: we convert each audio clip to a mel spectrogram, so that the input representation models human hearing perception, and feed it to a CNN based on the AlexNet architecture. To reduce overfitting we add pitch-shifted copies of the samples, and we add 10% empty (silent) samples, distributed equally over the classes, so that the network does not learn silence as a class feature.
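The mel-spectrogram step can be sketched as below. This is a minimal NumPy-only illustration (the notebook itself would typically use a library such as librosa); the function names, frame sizes, and filter count are assumptions chosen for the example, not taken from the project code.

```python
import numpy as np

def hz_to_mel(f):
    # Mel scale: roughly linear below 1 kHz, logarithmic above,
    # which is what approximates human pitch perception.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(y, sr, n_fft=512, hop=128, n_mels=64):
    # Short-time power spectrum, then projection onto the mel filterbank.
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(y) - n_fft + 1, hop):
        spec = np.fft.rfft(y[start:start + n_fft] * window)
        frames.append(np.abs(spec) ** 2)
    power = np.array(frames).T                       # (n_fft//2+1, n_frames)
    mel = mel_filterbank(sr, n_fft, n_mels) @ power  # (n_mels, n_frames)
    return 10.0 * np.log10(mel + 1e-10)              # log scale (dB-like)

# Example: one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t)
S = mel_spectrogram(y, sr)
print(S.shape)  # (n_mels, n_frames)
```

The resulting 2-D log-mel array is what gets treated as an image by the CNN.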

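The two augmentation tricks (pitch-shifted copies and equally distributed silent samples) can be sketched as follows. This is an assumed toy implementation: the naive resampling pitch shift also changes clip duration (a real pipeline would use something like librosa's pitch shifting, which keeps duration fixed), and all names and parameters here are illustrative, not from the project code.

```python
import numpy as np

def pitch_shift(y, semitones):
    # Naive pitch shift by resampling; note this also shortens/lengthens
    # the clip, unlike a proper phase-vocoder-based shift.
    rate = 2.0 ** (semitones / 12.0)
    idx = np.arange(0.0, len(y), rate)
    return np.interp(idx, np.arange(len(y)), y)

def add_silence_samples(X, labels, n_classes, fraction=0.10):
    # Add `fraction` of the dataset as near-silent clips, spread equally
    # over the classes, so silence cannot act as a class feature.
    n_extra = int(len(X) * fraction)
    per_class = max(n_extra // n_classes, 1)
    clip_len = len(X[0])
    for c in range(n_classes):
        for _ in range(per_class):
            X.append(np.random.normal(0.0, 1e-4, clip_len))  # faint noise
            labels.append(c)
    return X, labels

# Example: toy dataset of 20 one-second clips over 4 emotion classes.
sr = 16000
X = [np.random.randn(sr) * 0.1 for _ in range(20)]
labels = [i % 4 for i in range(20)]
X, labels = add_silence_samples(X, labels, n_classes=4)
shifted = pitch_shift(X[0], semitones=2)  # one pitched copy, 2 semitones up
```

Pitched copies make the model less sensitive to speaker pitch, while the silent samples force it to rely on spectral content rather than energy alone.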
Useful Links
Mel spectrogram: https://towardsdatascience.com/getting-to-know-the-mel-spectrogram-31bca3e2d9d0
Dataset: https://www.kaggle.com/dmitrybabko/speech-emotion-recognition-en
AlexNet: https://towardsdatascience.com/implementing-alexnet-cnn-architecture-using-tensorflow-2-0-and-keras-2113e090ad98