Pinned Repositories
awesome-emotion-recognition-in-conversations
A comprehensive reading list for Emotion Recognition in Conversations
awesome-sentiment-analysis
Reading list for Awesome Sentiment Analysis papers
conv-emotion
This repo contains implementations of different architectures for emotion recognition in conversations.
flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
multimodal-deep-learning
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
nora
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
tango
A family of diffusion models for text-to-audio generation.
TangoFlux
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching
Deep Cognition and Language Research (DeCLaRe) Lab's Repositories
declare-lab/multimodal-deep-learning
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
declare-lab/awesome-sentiment-analysis
Reading list for Awesome Sentiment Analysis papers
declare-lab/flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
declare-lab/awesome-emotion-recognition-in-conversations
A comprehensive reading list for Emotion Recognition in Conversations
declare-lab/MISA
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis
declare-lab/Multimodal-Infomax
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
declare-lab/RelationPrompt
This repository implements our ACL Findings 2022 research paper RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction. The goal of Zero-Shot Relation Triplet Extraction (ZeroRTE) is to extract relation triplets of the format (head entity, tail entity, relation), despite not having annotated data for the test relation labels.
declare-lab/dialogue-understanding
This repository contains PyTorch implementation for the baseline models from the paper Utterance-level Dialogue Understanding: An Empirical Study
declare-lab/contextual-utterance-level-multimodal-sentiment-analysis
Context-Dependent Sentiment Analysis in User-Generated Videos
declare-lab/flacuna
Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is already an excellent writing assistant, so the intention behind Flacuna was to enhance Vicuna's problem-solving capabilities; to achieve this, we curated the dedicated Flan-mini instruction dataset.
declare-lab/CASCADE
This repo contains code to detect sarcasm in text from discussion forums using deep learning.
declare-lab/BBFN
This repository contains the implementation of the paper Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis.
declare-lab/CICERO
This repository introduces new dialogue-level commonsense inference datasets and tasks. We chose dialogues as the data source because dialogues are known to be complex and rich in commonsense.
declare-lab/kingdom
Domain Adaptation using External Knowledge for Sentiment Analysis
declare-lab/hfusion
Multimodal sentiment analysis using hierarchical fusion with context modeling
declare-lab/MIME
This repository contains PyTorch implementations of the models from the paper MIME: MIMicking Emotions for Empathetic Response Generation.
declare-lab/speech-adapters
Code and datasets for our ICASSP 2023 paper, Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding
declare-lab/sentence-ordering
This repository contains the PyTorch implementation of the paper STaCK: Sentence Ordering with Temporal Commonsense Knowledge appearing at EMNLP 2021.
declare-lab/identifiable-transformers
declare-lab/ASTE-RL
This repository contains the source code for the paper "Aspect Sentiment Triplet Extraction using Reinforcement Learning", published at CIKM 2021.
declare-lab/M2H2-dataset
This repository contains the dataset and baselines explained in the paper: M2H2: A Multimodal Multiparty Hindi Dataset for Humor Recognition in Conversations
declare-lab/WikiDes
A Wikipedia-based summarization dataset
declare-lab/SAT
Code for the EMNLP 2022 Findings short paper "SAT: Improving Semi-Supervised Text Classification with Simple Instance-Adaptive Self-Training"
declare-lab/domadapter
Code for the EACL 2023 paper "UDAPTER: Efficient Domain Adaptation Using Adapters"
declare-lab/LG-VQA
declare-lab/SANCL
[COLING 2022] This repository contains the code of the paper SANCL: Multimodal Review Helpfulness Prediction with Selective Attention and Natural Contrastive Learning.
declare-lab/segue
Code and checkpoints for the Interspeech paper "Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language Understanding"
declare-lab/DPR
Dense Passage Retriever is a set of tools and models for the open-domain Q&A task.
declare-lab/Video2Music
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer Model
declare-lab/mustango
Mustango: Toward Controllable Text-to-Music Generation