ravdess-dataset

There are 27 repositories under the ravdess-dataset topic.

  • marcogdepinto/emotion-classification-from-audio-files

    Understanding emotions from audio files using neural networks and multiple datasets.

    Language: Python
  • IliaZenkov/transformer-cnn-emotion-recognition

    Speech Emotion Classification with a novel Parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transformers, and everything in between.

    Language: Jupyter Notebook
  • Data-Science-kosta/Speech-Emotion-Classification-with-PyTorch

    This repository contains PyTorch implementations of 4 different models for speech emotion classification.

    Language: Jupyter Notebook
  • ElenaRyumina/EMO-AffectNetModel

    Dynamic and static models for real-time facial emotion recognition

    Language: Jupyter Notebook
  • IliaZenkov/sklearn-audio-classification

    An in-depth analysis of audio classification on the RAVDESS dataset. Feature engineering, hyperparameter optimization, model evaluation, and cross-validation with a variety of ML techniques and MLPs.

    Language: Jupyter Notebook
  • LetianLee/Speech-Emotion-Recognition

    An implementation of Speech Emotion Recognition based on the HuBERT model, trained with PyTorch and the HuggingFace framework and fine-tuned on the RAVDESS dataset (see the HuBERT sketch after this list).

    Language: Jupyter Notebook
  • rudrajikadra/Speech-Emotion-Recognition-using-Librosa-library-and-MLPClassifier

    In this project we use the RAVDESS dataset to classify speech emotion using a Multi-Layer Perceptron classifier (see the MLP sketch after this list).

    Language: Jupyter Notebook
  • AndreaLombax/Speech_emotion_recognition

    This work proposes a speech emotion recognition model based on extracting four different features from RAVDESS sound files and stacking the resulting matrices into a one-dimensional array by taking the mean values along the time axis. This array is then fed into a 1-D CNN model as input (see the feature-extraction sketch after this list).

    Language: Python
  • ThomasRigoni7/Audio-emotion-recognition-RAVDESS

    Implementation of various models to address the speech emotion recognition (SER) task, using Python and PyTorch.

    Language: Python
  • Shreyasi2002/Speech-Emotion-Recognition--1

    Speech Emotion Recognition based on the RAVDESS dataset - Summer 2021, Brain and Cognitive Science.

    Language: Jupyter Notebook
  • niveditapatel/SER-models

    This repository is an import of the original repository containing some of the models we tested on the RAVDESS and TESS datasets for our research on speech emotion recognition models.

    Language: Jupyter Notebook
  • billy-enrizky/Speech-Emotion-Recognition

    This project focuses on real-time Speech Emotion Recognition (SER) using the "ravdess-emotional-speech-audio" dataset. Leveraging essential libraries and Long Short-Term Memory (LSTM) networks, it processes diverse emotional states expressed in 1440 audio files recorded by 24 professional actors (see the LSTM sketch after this list).

    Language: HTML
  • danielathome19/Sung-EmotioNN-Detector

    A convolutional neural network trained to classify emotions in singing voices.

    Language: Python
  • ayoubelaamri/Speech_Emotion_Recognition

    Web app to detect emotion from speech using a 67%-accuracy model built with 2D ConvNets trained on the RAVDESS & SAVEE datasets.

    Language: Jupyter Notebook
  • SpooderManEXE/Speech-Emotion-Recognition-using-MLP

    The SER model is capable of detecting eight different male/female emotions from speech audio using an MLP and the RAVDESS dataset.

    Language: Jupyter Notebook
  • vikrant-3009/SpeechEmotionRecognition

    Emotion recognition from speech using the Librosa library, MLPClassifier, and the RAVDESS database.

    Language: Jupyter Notebook
  • anki005/Speech-Emotion-Recognition-using-Deep-Learning

    Detects different emotions from live audio samples; the model is trained on the RAVDESS dataset.

    Language: Jupyter Notebook
  • iamgd/SER

    This project is about speech emotion recognition using machine learning models.

    Language: Python
  • mitul-shalehin/Facial-Emotion-and-Voice-Detection

    Emotion and voice detection using a machine learning Python project. This project detects human voice and facial emotion.

    Language: Jupyter Notebook
  • OmkarNarvekar001/ART_GENERATION_USING_SPEECH_EMOTIONS

    Translating speech to images directly, without text, is an interesting and useful topic with potential applications in computer-aided design, human-computer interaction, the creation of art forms, etc. We have therefore focused on developing a deep learning and GAN-based model that takes speech as input from the user, analyzes the emotions associated with it, and generates the artwork requested by the user accordingly, providing a personalized experience. The approach used here is a convolutional VQGAN that learns a codebook of context-rich visual parts, whose composition is subsequently modeled with an autoregressive transformer architecture. CLIP (Contrastive Language-Image Pre-Training), a transformer-based model trained to determine which caption from a set of captions best fits a given image, is also used in this project. The input speech is classified into 8 different emotions using an MLP classifier trained on the RAVDESS emotional speech audio dataset, and this acts as a base filter for the VQGAN model. Text converted from speech plays an important role in producing the final output image via the CLIP model. Together, the VQGAN+CLIP model utilizes both emotions and text to generate a more personalized artwork.

    Language: Jupyter Notebook
  • prernasingh05/CodeClause_Speech_Emotion_Recognition

    Speech Emotion Recognition project using a Multi-Layer Perceptron model with several customized attributes for optimal performance.

    Language: Jupyter Notebook
  • raho0/emotion-recognition-cnn

    Emotion recognition using the RAVDESS dataset with CNN and time-series models.

    Language: Jupyter Notebook
  • Salo-26/SER-Webapp

    This project implements a Speech Emotion Recognition (SER) model using TensorFlow Lite, specifically designed for deployment on microcontrollers like the Arduino Nano BLE33. The model is trained on the RAVDESS dataset and can recognize seven emotions: Angry, Disgust, Fear, Happy, Neutral, Sad, and Surprise (see the TensorFlow Lite conversion sketch after this list).

    Language: Jupyter Notebook
  • shudhanshurp/Emotion-Recognition-from-Audio

    Emotion Recognition from Audio (ERA) is an innovative project that classifies human emotions from speech using advanced machine learning techniques.

    Language: JavaScript
  • AlessioLucciola/multimodal-advertisement-sentiment-analysis

    Final project for the master's degree in Computer Science course "Multimodal Interaction" at the University of Rome "La Sapienza" (A.Y. 2023-2024).

    Language: Jupyter Notebook
  • simrann20/QiCNN-algorithm

    Audio-image classification of emotions

  • yhfie/emotion-recognition-audio-streamlit

    My team's machine learning final group project: an emotion classification web app that helps newbie actors act based on given scripts and emotions.

    Language: Jupyter Notebook
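
Several of the entries above (rudrajikadra, SpooderManEXE, vikrant-3009, prernasingh05) describe the same basic recipe: extract Librosa features per RAVDESS file and train a scikit-learn MLPClassifier. Below is a minimal sketch of that recipe, assuming MFCC-mean features and an eight-class label set; the function names, feature choice, and hyperparameters are illustrative, not taken from any of those repositories.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def mfcc_mean(path):
    # Load a RAVDESS .wav file and summarize it as the mean of 40 MFCCs over time.
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).mean(axis=1)

def train_mlp(files, labels):
    # files: list of .wav paths; labels: matching list of emotion names.
    X = np.array([mfcc_mean(f) for f in files])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels)
    clf = MLPClassifier(hidden_layer_sizes=(300,), alpha=0.01, max_iter=500)
    clf.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```

In RAVDESS, the emotion label is encoded in the third field of each file name (e.g. the "05" in 03-01-05-01-01-01-12.wav means "angry"), which is how most of these projects build the label list.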
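The AndreaLombax/Speech_emotion_recognition entry describes extracting four feature matrices per file, taking the mean along the time axis, and feeding the resulting one-dimensional array to a 1-D CNN. A minimal sketch of that pipeline follows, assuming the four features are MFCC, chroma, mel spectrogram, and spectral contrast (the entry does not name them), and using an illustrative Keras network rather than the repository's actual architecture.

```python
import numpy as np
import librosa
import tensorflow as tf

def stacked_feature_vector(path):
    # Extract four feature matrices (rows = coefficients, columns = time frames) ...
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)            # (40, T)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)              # (12, T)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)              # (128, T)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)      # (7, T)
    # ... stack them, then collapse the time axis with a mean to get one 1-D vector per file.
    return np.vstack([mfcc, chroma, mel, contrast]).mean(axis=1)  # (187,)

def build_1d_cnn(num_features=187, num_classes=8):
    # The 1-D feature vector is treated as a length-187 sequence with a single channel.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features, 1)),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
```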
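The billy-enrizky/Speech-Emotion-Recognition entry mentions an LSTM-based pipeline over the 1440 RAVDESS clips. A minimal Keras sketch of that kind of model is shown below; the input representation (fixed-length MFCC frame sequences) and layer sizes are assumptions for illustration, not the repository's configuration.

```python
import tensorflow as tf

def build_lstm_ser(time_steps=200, n_mfcc=40, num_classes=8):
    # Each clip is represented as a (time_steps, n_mfcc) sequence of MFCC frames,
    # padded or truncated to a fixed length before training.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(time_steps, n_mfcc)),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_lstm_ser()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```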
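The LetianLee/Speech-Emotion-Recognition entry fine-tunes HuBERT with HuggingFace. The sketch below shows one common way to set that up with the transformers library; the facebook/hubert-base-ls960 checkpoint, the 8-label head, and the inference helper are assumptions, not details taken from the repository (the classification head is randomly initialized until fine-tuned).

```python
import torch
from transformers import AutoFeatureExtractor, HubertForSequenceClassification

CHECKPOINT = "facebook/hubert-base-ls960"  # assumed base checkpoint
extractor = AutoFeatureExtractor.from_pretrained(CHECKPOINT)
model = HubertForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=8)

def predict_emotion(waveform_16k):
    # waveform_16k: 1-D float array resampled to 16 kHz (HuBERT's expected rate).
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()
```

Fine-tuning itself would typically be driven by the HuggingFace Trainer API or a standard PyTorch training loop over the RAVDESS clips.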
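The Salo-26/SER-Webapp entry targets TensorFlow Lite on an Arduino Nano BLE33. A minimal conversion sketch is shown below; the post-training quantization setting and output file name are assumptions, not the repository's exact export pipeline.

```python
import tensorflow as tf

def convert_to_tflite(keras_model, out_path="ser_model.tflite"):
    # Convert a trained Keras SER model into a TensorFlow Lite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
    return out_path
```

For a microcontroller deployment the flatbuffer is usually embedded as a C array (e.g. via xxd -i) and linked against TensorFlow Lite Micro.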