affective-computing
There are 157 public repositories under the affective-computing topic.
Emotional-Text-to-Speech/dl-for-emo-tts
:computer: :robot: A summary of our attempts at using Deep Learning approaches for Emotional Text-to-Speech :speaker:
ZebangCheng/Emotion-LLaMA
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
face-analysis/emonet
Official implementation of the paper "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos and Maja Pantic, Nature Machine Intelligence, 2021
optas/artemis
Learning to ground explanations of affect for visual art.
AMAAI-Lab/Video2Music
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
AmrMKayid/awesome-affective-computing
A curated list of awesome affective computing 🤖❤️ papers, software, open-source projects, and resources
zhongpeixiang/AI-NLP-Paper-Readings
This is my reading list for my PhD in AI, NLP, Deep Learning and more.
MarioRuggieri/Emotion-Recognition-from-Speech
A machine learning application for emotion recognition from speech
imatge-upc/sentiment-2017-imavis
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
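Fine-tuning a pretrained CNN for visual sentiment typically means swapping its classifier head for a sentiment head and training that first. A minimal sketch with torchvision; the 3-class head, warm-up freezing, and hyperparameters are illustrative assumptions, not the repository's exact setup:

```python
# Minimal sketch: fine-tune a torchvision CNN for visual sentiment
# prediction. The 3-class head and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)  # e.g. negative/neutral/positive

# Warm up by training only the new head (a common fine-tuning strategy).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```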
abikaki/awesome-speech-emotion-recognition
😎 Awesome lists about Speech Emotion Recognition
SMARTlab-Purdue/Husformer
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
UttaranB127/STEP
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Gijom/TEAP
Toolbox for Emotion Analysis using Physiological signals
praveena2j/JointCrossAttentional-AV-Fusion
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
max-talanov/1
personal repository
UttaranB127/speech2affective_gestures
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
praveena2j/Joint-Cross-Attention-for-Audio-Visual-Fusion
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
pritamqu/SSL-ECG
Self-supervised ECG Representation Learning - ICASSP 2020 and IEEE T-AFFC
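The self-supervised idea here is a pretext task on unlabeled ECG: apply a random signal transformation and train the network to recognize which one was applied. A toy sketch of that pretext setup, where the transformation set and parameters are simplified assumptions rather than the paper's exact configuration:

```python
# Sketch of a transformation-recognition pretext task for ECG
# self-supervision; transformations and magnitudes are simplified.
import numpy as np

def transform(x, kind):
    if kind == 0: return x                                        # original
    if kind == 1: return x + np.random.normal(0, 0.05, x.shape)   # added noise
    if kind == 2: return 1.5 * x                                  # scaling
    if kind == 3: return -x                                       # negation
    return x[::-1]                                                # temporal inversion

def make_pretext_batch(signals):
    xs, ys = [], []
    for x in signals:
        k = np.random.randint(5)
        xs.append(transform(x, k))
        ys.append(k)  # the label is the transformation id, not an emotion
    return np.stack(xs), np.array(ys)
```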
ZihengZZH/bipolar-disorder
Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20
praveena2j/Cross-Attentional-AV-Fusion
FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition
Kaist-ICLab/K-EmoCon_SupplementaryCodes
Supplementary code for the K-EmoCon dataset
SEERNET/EmoInt
EmoInt provides a high-level wrapper for combining various word embeddings and creating ensembles from multiple trained models
sofiabroome/painface-recognition
Using deep recurrent networks to recognize horses' pain expressions in video.
sotirismos/emotion-recognition-conversations
Diploma thesis on emotion recognition in conversations using physiological signals (ECG, HRV, GSR, TEMP) and an attention-based LSTM network
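As a rough illustration of the model family this entry names, here is a minimal attention-based LSTM over physiological time series in PyTorch; the feature count, window length, and class count are placeholder assumptions:

```python
# Illustrative attention-based LSTM over physiological time series
# (e.g. ECG/HRV/GSR/TEMP features); all dimensions are assumptions.
import torch
import torch.nn as nn

class AttnLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, features)
        h, _ = self.lstm(x)                     # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # weighted sum -> (batch, hidden)
        return self.head(ctx)

logits = AttnLSTM()(torch.randn(8, 120, 4))    # 8 windows, 120 steps, 4 signals
```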
UttaranB127/Text2Gestures
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
bagustris/text-vad
VAD (valence-arousal-dominance) analysis of text using affective lexicons (ANEW, SentiWordNet, and VADER)
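Lexicon-based VAD analysis boils down to looking up each word's valence/arousal/dominance ratings and averaging them over the text. A toy sketch; the inline lexicon values are made up for illustration, not real ANEW entries:

```python
# Toy lexicon-based VAD scorer: average per-word (valence, arousal,
# dominance) ratings. The inline lexicon values are illustrative only.
VAD_LEXICON = {
    "happy":  (8.2, 6.5, 7.0),
    "calm":   (6.9, 2.4, 6.0),
    "afraid": (2.0, 6.4, 3.1),
}

def vad_score(text):
    hits = [VAD_LEXICON[w] for w in text.lower().split() if w in VAD_LEXICON]
    if not hits:
        return None  # no lexicon coverage for this text
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))

print(vad_score("I feel happy but a little afraid"))  # -> (5.1, 6.45, 5.05)
```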
guangyizhangbci/PARSE
IEEE Transactions on Affective Computing, 2022
pjyazdian/Gesture2Vec
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
officialarijit/RECS
Real-time emotion recognition using physiological signals in e-learning. This repository contains the development of real-time emotion recognition from various physiological signals.
sailordiary/m3f.pytorch
PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation"
Kilichbek/artemis-speaker-tools-b
Artemis Speaker Tools B
Emilien-mipt/fer-pytorch
Facial expression recognition package built on PyTorch and the FER+ dataset from Microsoft.
SMARTlab-Purdue/ros2-foxy-wearable-biosensors
This repository is a wearable-biosensor package for ROS 2 Foxy. Its ultimate goal is to expand the biosensor ecosystem in the Human-Robot Interaction (HRI) field. The package currently supports six wearable biosensors that can be used in HRI research without the behavioral constraints caused by limited hardware (e.g., wired devices). We will keep updating this repository to support more wearable sensors on ROS 2. If you are interested in this project, please contact us.
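Consuming a package like this typically means subscribing to the sensor topics it publishes. A minimal rclpy sketch, assuming a hypothetical /biosensor/heart_rate topic publishing std_msgs/Float32 (not this package's actual interface):

```python
# Minimal ROS 2 (rclpy) subscriber sketch; the topic name
# /biosensor/heart_rate and the Float32 message type are assumptions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32

class HeartRateListener(Node):
    def __init__(self):
        super().__init__('heart_rate_listener')
        self.create_subscription(Float32, '/biosensor/heart_rate',
                                 self.on_sample, 10)

    def on_sample(self, msg):
        self.get_logger().info(f'heart rate: {msg.data:.1f} bpm')

def main():
    rclpy.init()
    rclpy.spin(HeartRateListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```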
vishaal27/Multimodal-Video-Emotion-Recognition-Pytorch
A PyTorch implementation of emotion recognition from videos
praveena2j/RJCMA
ABAW6 (CVPRW): We achieved second place in the valence-arousal challenge of ABAW6
bagustris/w2v2-vad
A wrapper for audEERING's wav2vec 2.0-based dimensional speech emotion recognition model
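The underlying pattern is a wav2vec 2.0 encoder with a small regression head that outputs arousal, dominance, and valence. A condensed sketch along the lines of audEERING's published example, quoted from memory, so the model id and head layout should be treated as assumptions:

```python
# Condensed sketch of wav2vec2-based dimensional SER; details follow
# audEERING's published example from memory and may differ slightly.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Processor
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    Wav2Vec2Model, Wav2Vec2PreTrainedModel)

class RegressionHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.final_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, x):
        x = torch.tanh(self.dense(self.dropout(x)))
        return self.out_proj(self.dropout(x))

class EmotionModel(Wav2Vec2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = RegressionHead(config)
        self.init_weights()

    def forward(self, input_values):
        hidden = self.wav2vec2(input_values)[0].mean(dim=1)   # pool over time
        return self.classifier(hidden)  # (batch, 3): arousal, dominance, valence

name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'  # assumed model id
processor = Wav2Vec2Processor.from_pretrained(name)
model = EmotionModel.from_pretrained(name)

wav = torch.zeros(16000)  # 1 s of silence at 16 kHz as a placeholder input
inputs = processor(wav.numpy(), sampling_rate=16000, return_tensors='pt')
with torch.no_grad():
    ados = model(inputs.input_values)  # values roughly in [0, 1]
```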