yuandapiaoliang's Stars
state-spaces/mamba
Mamba SSM architecture
redotvideo/mamba-chat
Mamba-Chat: A chat LLM based on the state-space model architecture 🐍
Dao-AILab/causal-conv1d
Causal depthwise conv1d in CUDA, with a PyTorch interface
YeexiaoZheng/Multimodal-Sentiment-Analysis
Multimodal sentiment analysis: multiple fusion methods based on BERT and ResNet
tzirakis/Multimodal-Emotion-Recognition
This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks".
shamanez/Self-Supervised-Embedding-Fusion-Transformer
The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self-Supervised Feature Fusion".
Sreyan88/MMER
Code for the Interspeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition"
wenliangdai/Modality-Transferable-MER
Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities.
PINTO0309/OpenVINO-EmotionRecognition
OpenVINO + NCS2/NCS + MultiModel (FaceDetection, EmotionRecognition) + MultiStick + MultiProcess + MultiThread + USB Camera/PiCamera. Raspberry Pi 3 compatible. Async.
sunlicai/EMT-DLFR
Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023)
thuiar/TCL-MAP
TCL-MAP is a powerful method for multimodal intent recognition (AAAI 2024)
guanghaoyin/RTCAN-1D
PyTorch code for our TOMM 2022 paper "A Multimodal Framework for Large-Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals"
iiscleap/multimodal_emotion_recognition
Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition" submitted to ICASSP 2022
melanchthon19/multimodal_cnn
Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on CMU-MOSEI dataset.
fuyahuii/ConSK-GCN
The PyTorch code for the paper "ConSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotion Recognition".
ECNU-Cross-Innovation-Lab/LGCCT
(DOI: 10.3390/e24071010) LGCCT: A Light Gated and Crossed Complementation Transformer for Multimodal Speech Emotion Recognition
xiaoyuan1996/Res-Trans
First place in the 2020 iFLYTEK Multimodal Emotion Analysis and Recognition Challenge
ttrikn/EMVAS
Open-source code for the paper "End-to-End Multimodal Emotion Visualization Analysis System"
iiscleap/CANAVER
Code for "Multimodal Cross Attention Network for Audio Visual Emotion Recognition"
goldeneave/NovelSentimentAnalysis
A new multimodal sentiment analysis method, applicable to social media sentiment analysis.