multimodal-sentiment-analysis
There are 44 repositories under the multimodal-sentiment-analysis topic.
declare-lab/MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
declare-lab/multimodal-deep-learning
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
thuiar/MMSA
MMSA is a unified framework for Multimodal Sentiment Analysis.
microsoft/UniVL
An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"
YeexiaoZheng/Multimodal-Sentiment-Analysis
Multimodal sentiment analysis: several fusion methods based on BERT + ResNet
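Several of the repositories above combine a text encoder (BERT) with an image encoder (ResNet). The simplest such combination, late (decision-level) fusion, averages per-modality predictions. The sketch below is a hypothetical illustration in plain Python; the stub classifiers, class probabilities, and fusion weights are assumptions, not code from any listed repository.

```python
# Hypothetical late-fusion sketch: each modality classifies
# independently, and the fused prediction is a weighted average.
# Real systems would replace the stubs with BERT (text) and
# ResNet (image) classifier heads over the same label set.

def text_sentiment(text: str) -> list[float]:
    # Stub for a BERT-based classifier: [negative, neutral, positive].
    return [0.1, 0.2, 0.7]

def image_sentiment(image_path: str) -> list[float]:
    # Stub for a ResNet-based classifier over the same classes.
    return [0.3, 0.4, 0.3]

def late_fusion(text, image_path, w_text=0.6, w_image=0.4):
    # Weighted average of the two probability distributions.
    pt = text_sentiment(text)
    pi = image_sentiment(image_path)
    return [w_text * a + w_image * b for a, b in zip(pt, pi)]

fused = late_fusion("great movie!", "frame.jpg")
label = ["negative", "neutral", "positive"][fused.index(max(fused))]
```

Because each modality is trained and run independently, late fusion is easy to add to an existing unimodal pipeline; the feature-level (early) fusion variants in these repositories instead concatenate or attend over encoder features before classification.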
declare-lab/Multimodal-Infomax
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
thuiar/MMSA-FET
A tool for extracting multimodal features from videos.
declare-lab/contextual-utterance-level-multimodal-sentiment-analysis
Context-Dependent Sentiment Analysis in User-Generated Videos
shamanez/Self-Supervised-Embedding-Fusion-Transformer
Code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self-Supervised Feature Fusion".
thuiar/Cross-Modal-BERT
CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020)
PreferredAI/vista-net
Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Haoyu-ha/ALMT
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
anita-hu/MSAF
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
abikaki/awesome-speech-emotion-recognition
😎 Awesome lists about Speech Emotion Recognition
declare-lab/BBFN
This repository contains the implementation of the paper -- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
NUSTM/FacialMMT
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023)
Vincent-ZHQ/DMER
A survey of deep multimodal emotion recognition.
declare-lab/hfusion
Multimodal sentiment analysis using hierarchical fusion with context modeling
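The hierarchical-fusion idea named above (fuse modalities pairwise before a final trimodal combination) can be sketched abstractly. The element-wise products, averaging step, and toy vectors below are illustrative assumptions, not the paper's actual learned architecture.

```python
# Illustrative sketch of hierarchical fusion: unimodal feature
# vectors are first fused pairwise, then the pairwise fusions are
# combined into one trimodal representation. Element-wise products
# stand in for the learned bimodal fusion layers of a real model.

def pairwise_fuse(a: list[float], b: list[float]) -> list[float]:
    # Stand-in for a learned bimodal fusion layer.
    return [x * y for x, y in zip(a, b)]

def hierarchical_fuse(text, audio, video):
    ta = pairwise_fuse(text, audio)    # text-audio fusion
    tv = pairwise_fuse(text, video)    # text-video fusion
    av = pairwise_fuse(audio, video)   # audio-video fusion
    # Final trimodal stage: combine the three bimodal fusions.
    return [(x + y + z) / 3 for x, y, z in zip(ta, tv, av)]

t, a, v = [1.0, 0.5], [0.2, 0.4], [0.6, 0.8]
trimodal = hierarchical_fuse(t, a, v)
```

Fusing pairs first lets the model weight each bimodal interaction separately before committing to a single trimodal representation, which is the core design choice the repository's title describes.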
XL2248/MSCTD
Code and Data for the ACL22 main conference paper "MSCTD: A Multimodal Sentiment Chat Translation Dataset"
Haoyu-ha/LNLN
Towards Robust Multimodal Sentiment Analysis with Incomplete Data
roshansridhar/Multimodal-Sentiment-Analysis
Research on improving text sentiment analysis with facial features extracted from video using machine learning.
declare-lab/MSA-Robustness
NAACL 2022 paper on Analyzing Modality Robustness in Multimodal Sentiment Analysis
Kaicheng-Yang0828/Multimodal-Sentiment-Analysis-Paper-list
A curated paper list on multimodal sentiment analysis.
declare-lab/MM-Align
[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
sverma88/DeepCU-IJCAI19
DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19
MKMaS-GUET/KuDA
Codebase for EMNLP 2024 Findings Paper "Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis"
vkeswani/IITK_Memotion_Analysis
Bimodal and Unimodal Sentiment Analysis of Internet Memes (Image+Text)
imadhou/multimodal-sentiment-analysis
Multimodal sentiment analysis
WangJingyao07/Multimodal-Sentiment-Analysis-for-Health-Navigation
Emotion recognition methods using facial expressions, speech, audio, and multimodal data
mongodb-developer/Google-Cloud-Sentiment-Chef
Sentiment analysis, summarization, and tagging with MongoDB Atlas and Gemini, Google Cloud's AI model
Baibhav-nag/Multimodal-emotion-recognition
Multimodal emotion recognition on two benchmark datasets, RAVDESS and SAVEE, from audio-visual information using CNNs (convolutional neural networks)
cleopatra-itn/fair_multimodal_sentiment
Code and splits for the paper "A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods", in Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Understanding (MMPT '21), August 21, 2021, Taipei, Taiwan
ichalkiad/cryptogpcausality
This repository contains the code for the paper "Sentiment-driven statistical causality in multimodal systems", by Ioannis Chalkiadakis, Anna Zaremba, Gareth W. Peters and Michael J. Chantler.
ttrikn/EMVAS
Open-source code for the paper "End-to-End Multimodal Emotion Visualization Analysis System"
elsobhano/Multimodal-Emotion-Recognition
Multimodal Emotion Recognition using ClipBERT.