affective-computing
There are 147 repositories under the affective-computing topic.
Emotional-Text-to-Speech/dl-for-emo-tts
:computer: :robot: A summary of our attempts at using deep learning approaches for emotional text-to-speech :speaker:
optas/artemis
Learning to ground explanations of affect for visual art.
face-analysis/emonet
Official implementation of the paper "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos and Maja Pantic, Nature Machine Intelligence, 2021
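Several entries in this list (emonet, M³T, the joint cross-attention models) work in the dimensional emotion model, which represents emotion as continuous valence (pleasantness) and arousal (intensity) coordinates rather than discrete categories. A minimal illustrative sketch of that idea, mapping valence-arousal coordinates to coarse quadrant labels — this is a toy example, not code from any of the listed repositories:

```python
def va_quadrant(valence: float, arousal: float) -> str:
    """Map continuous valence/arousal in [-1, 1] to a coarse quadrant label.

    Toy illustration of the dimensional emotion model; the quadrant names
    are conventional examples, not labels used by emonet or the papers above.
    """
    if valence >= 0 and arousal >= 0:
        return "happy/excited"   # pleasant, high energy
    if valence >= 0:
        return "calm/content"    # pleasant, low energy
    if arousal >= 0:
        return "angry/afraid"    # unpleasant, high energy
    return "sad/bored"           # unpleasant, low energy

print(va_quadrant(0.7, 0.4))    # → happy/excited
print(va_quadrant(-0.5, -0.2))  # → sad/bored
```

The models listed here regress the continuous coordinates directly; the discretization above is only to make the two axes concrete.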
zhongpeixiang/AI-NLP-Paper-Readings
This is my reading list for my PhD in AI, NLP, Deep Learning and more.
AmrMKayid/awesome-affective-computing
A curated list of awesome affective computing 🤖❤️ papers, software, open-source projects, and resources
ZebangCheng/Emotion-LLaMA
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
AMAAI-Lab/Video2Music
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
MarioRuggieri/Emotion-Recognition-from-Speech
A machine learning application for emotion recognition from speech
SMARTlab-Purdue/Husformer
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
imatge-upc/sentiment-2017-imavis
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
abikaki/awesome-speech-emotion-recognition
😎 Awesome lists about Speech Emotion Recognition
UttaranB127/STEP
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Gijom/TEAP
Toolbox for Emotion Analysis using Physiological signals
max-talanov/1
personal repository
UttaranB127/speech2affective_gestures
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
ZihengZZH/bipolar-disorder
Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20
praveena2j/JointCrossAttentional-AV-Fusion
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
pritamqu/SSL-ECG
Self-supervised ECG Representation Learning - ICASSP 2020 and IEEE T-AFFC
praveena2j/Joint-Cross-Attention-for-Audio-Visual-Fusion
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
SEERNET/EmoInt
EmoInt provides a high-level wrapper for combining various word embeddings and creating ensembles from multiple trained models
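The ensembling that EmoInt's description mentions typically means averaging the predictions of several independently trained models. A minimal sketch of that pattern — the callables and data here are hypothetical stand-ins, not EmoInt's actual interface:

```python
from statistics import mean

def ensemble_predict(models, x):
    """Average real-valued predictions from several models.

    `models` is any list of callables returning a float score; this is a
    generic sketch of prediction averaging, not EmoInt's API.
    """
    return mean(m(x) for m in models)

# Toy "models": constant-offset scorers standing in for trained regressors.
models = [lambda x: x + 0.1, lambda x: x - 0.1, lambda x: x]
print(ensemble_predict(models, 0.5))  # → 0.5
```

Averaging reduces the variance of the individual regressors, which is why ensembles of models trained on different embeddings often outperform any single one.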
sofiabroome/painface-recognition
Using deep recurrent networks to recognize horses' pain expressions in video.
UttaranB127/Text2Gestures
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
praveena2j/Cross-Attentional-AV-Fusion
FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition
pjyazdian/Gesture2Vec
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
bagustris/text-vad
VAD analysis of text using affective lexicons (ANEW, SentiWordNet, and VADER)
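Lexicon-based VAD analysis of this kind works by looking up each word's valence-arousal-dominance ratings and averaging them over the text. A minimal sketch with a made-up three-word lexicon — real tools draw their entries from resources like ANEW, and this is not the text-vad repository's code:

```python
# Toy VAD (valence, arousal, dominance) lexicon; values are illustrative,
# not actual ANEW/SentiWordNet/VADER ratings.
LEXICON = {
    "happy": (0.90, 0.60, 0.70),
    "calm":  (0.75, 0.20, 0.60),
    "angry": (0.15, 0.85, 0.55),
}

def vad_score(text):
    """Average the VAD values of known words; return None if no word matches."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return None
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))

print(vad_score("happy but angry"))  # ≈ (0.525, 0.725, 0.625)
```

Unknown words are simply skipped, which is the usual behavior of lexicon-based scorers; coverage of the lexicon therefore matters as much as the ratings themselves.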
guangyizhangbci/PARSE
IEEE Transactions on Affective Computing, 2022
Kaist-ICLab/K-EmoCon_SupplementaryCodes
Supplementary codes for the K-EmoCon dataset
Kilichbek/artemis-speaker-tools-b
Artemis Speaker Tools B
sailordiary/m3f.pytorch
PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation"
Emilien-mipt/fer-pytorch
Facial expression recognition package built on PyTorch and the FER+ dataset from Microsoft.
sotirismos/emotion-recognition-conversations
Diploma thesis on emotion recognition in conversations using physiological signals (ECG, HRV, GSR, TEMP) and an attention-based LSTM network
officialarijit/RECS
Real-time Emotion Recognition using Physiological Signals in e-Learning. This repository tracks the development of real-time emotion recognition from various physiological signals.
SMARTlab-Purdue/ros2-foxy-wearable-biosensors
This repository is a wearable-biosensor package for ROS 2 Foxy. Its goal is to expand the biosensor ecosystem in the Human-Robot Interaction (HRI) field. The package currently supports six wearable biosensors that can be used in HRI research without the behavioral constraints imposed by limited hardware (e.g., wired devices). Support for additional wearable sensors on ROS 2 is planned; if you are interested in this project, please contact us.
bagustris/w2v2-vad
A wrapper for Audeering's wav2vec-based dimensional speech emotion recognition
IoBT-VISTEC/Deep-Learning-for-EEG-Based-Biometrics
Affective EEG-Based Person Identification Using the Deep Learning Approach (IEEE Transactions on Cognitive and Developmental Systems)
vishaal27/Multimodal-Video-Emotion-Recognition-Pytorch
A PyTorch implementation of emotion recognition from videos