jackchen69's Stars
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision
1c7/chinese-independent-developer
List of independent developers in China: sharing what everyone is building
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
linto-ai/whisper-timestamped
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
declare-lab/awesome-sentiment-analysis
Reading list for Awesome Sentiment Analysis papers
audeering/w2v2-how-to
How to use our public wav2vec2 dimensional emotion model
kharrigian/mental-health-datasets
An evolving list of electronic media data sets used to model mental-health status.
ydwen/opensphere
A hyperspherical face recognition library based on PyTorch
uniBruce/Mead
MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV2020]
YeexiaoZheng/Multimodal-Sentiment-Analysis
Multimodal sentiment analysis: multiple fusion methods based on BERT + ResNet
vasistalodagala/whisper-finetune
Fine-tune and evaluate Whisper models for Automatic Speech Recognition (ASR) on custom datasets or datasets from Hugging Face.
declare-lab/MISA
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis
b04901014/FT-w2v2-ser
Official implementation for the paper Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition
shamanez/BERT-like-is-All-You-Need
The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
shamanez/Self-Supervised-Embedding-Fusion-Transformer
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
asappresearch/sew
mechanicalsea/lighthubert
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
usc-sail/fed-multimodal
[KDD 2023] FedMultimodal
vimar-gu/MSINet
[CVPR2023] Twins Contrastive Search of Multi-Scale Interaction for Object Re-Identification
TmacMai/Multimodal-Information-Bottleneck
Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations (MIB for multimodal sentiment analysis)
usc-sail/peft-ser
PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models (Accepted to 2023 ACII)
jbdel/modulated_fusion_transformer
Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition
TaoShi1998/MultiEMO-ACL2023
MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations (ACL 2023)
EIHW/MuSe2022
TmacMai/multimodal-fusion
Multimodal Fusion, Multimodal Sentiment Analysis
ilucasgoncalves/AuxFormer
AuxFormer: Robust Approach to Audiovisual Emotion Recognition
Lamomal/s3prl_correlation
Self-Supervised Speech Pre-training and Representation Learning Toolkit.
mogvision/regbn
qiuchili/diasenti
Conversational Multimodal Emotion Recognition
isaacOnline/SpEAT
Official implementation of the paper "Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition"