Emotion-Based-Classification-of-Music-Using-Lyrics-and-Audio

The main objective of this project is to build a model for Music Emotion Recognition (MER) that uses both components of music, lyrics and audio, by comparing various preprocessing methods and feature engineering techniques and applying several classification models to roughly 16k lyrics and audio samples.

The goal is to classify music into four emotion classes based on Russell's valence-arousal (V-A) model: sad, angry, happy, and calm.
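
For reference, Russell's model places emotions on a valence (pleasantness) and arousal (energy) plane, and the four classes correspond to its quadrants. Below is a minimal sketch of that quadrant-to-label mapping; centering both axes at 0 is an assumption, not necessarily the thresholds used in this project:

```python
def va_to_emotion(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to one of the four quadrant labels.

    Assumes both axes are centered at 0: positive valence means pleasant,
    positive arousal means energetic.
    """
    if valence >= 0 and arousal >= 0:
        return "happy"   # high valence, high arousal
    if valence >= 0:
        return "calm"    # high valence, low arousal
    if arousal >= 0:
        return "angry"   # low valence, high arousal
    return "sad"         # low valence, low arousal
```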

Integrating the results from the lyrics and audio models gives an accuracy of 90.14%.
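
The README does not spell out how the two results are combined; one common approach, shown here purely as an illustration, is late fusion of the class probabilities produced by the lyrics and audio classifiers. The probability values and the equal 0.5/0.5 weighting below are assumptions:

```python
import numpy as np

labels = ["sad", "angry", "happy", "calm"]

# Hypothetical per-class probabilities for one track from each classifier.
lyrics_probs = np.array([0.10, 0.15, 0.60, 0.15])
audio_probs  = np.array([0.05, 0.10, 0.70, 0.15])

# Late fusion: weighted average of the two probability vectors.
# Equal weights are an assumption, not the project's documented scheme.
w_lyrics, w_audio = 0.5, 0.5
fused = w_lyrics * lyrics_probs + w_audio * audio_probs

print(labels[int(np.argmax(fused))])  # -> "happy"
```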

Language: Python

Description of the uploaded files:

Presentation - Contains the project presentation (PPT)

Lyrics - Contains the code to web-scrape the lyrics data and preprocess the lyrics (a rough preprocessing sketch appears at the end of this README)

Lyrics Classification - Contains the code to classify emotions using lyrics features

Dataset - Contains the code to web-scrape the audio data

Feature Extraction - Contains the code to extract features from the audio dataset using various preprocessing and feature engineering methods (a feature-extraction sketch appears at the end of this README)

Emotion Recognition - Contains the code to classify emotions from the extracted features using various classification models and to integrate the lyrics and audio results

train.xlsx - Contains all the features extracted from the train audio dataset, as an Excel file

test.xlsx - Contains all the features extracted from the test audio dataset, as an Excel file

validation.xlsx - Contains all the features extracted from the validation audio dataset, as an Excel file
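
As a rough illustration of the kind of preprocessing the Lyrics notebook performs, a typical lyrics pipeline lowercases the text, strips punctuation, tokenizes, and removes stopwords. The sketch below uses NLTK's English stopword list; the project's actual steps and libraries may differ:

```python
import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))

def preprocess_lyrics(text: str) -> list:
    """Lowercase, strip non-letters, tokenize, and drop English stopwords."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # keep letters and whitespace only
    return [tok for tok in text.split() if tok not in STOPWORDS]

print(preprocess_lyrics("Hello darkness, my old friend"))
```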
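
Similarly, here is a hedged sketch of audio feature extraction that writes rows shaped like those in train.xlsx. It uses librosa, a common choice for this task; the specific features (13 MFCC means, chroma mean, spectral centroid mean), the 30-second clip length, and the file paths are all illustrative assumptions, not the project's documented configuration:

```python
import librosa
import numpy as np
import pandas as pd

def extract_features(path: str) -> dict:
    """Summarize one audio file into a flat feature row."""
    # Loading only the first 30 seconds is an assumption.
    y, sr = librosa.load(path, duration=30)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    row = {f"mfcc_{i}_mean": float(np.mean(c)) for i, c in enumerate(mfcc)}
    row["chroma_mean"] = float(np.mean(chroma))
    row["spectral_centroid_mean"] = float(np.mean(centroid))
    return row

# Hypothetical file paths; the real audio is scraped by the Dataset notebook.
rows = [extract_features(p) for p in ["song1.wav", "song2.wav"]]
pd.DataFrame(rows).to_excel("train.xlsx", index=False)
```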