hellomohitsangwan/Speech_Emotion_Recognition
• Classified audio clips of 24 actors (12 male, 12 female) from the RAVDESS dataset into 8 universal emotions. • Extracted MFCC, chroma, and mel-spectrogram features using Librosa for audio analysis. • Implemented both FCNN and CNN models, achieving 64% and 71% accuracy, respectively.
Jupyter Notebook