Pinned Repositories
awesome-emotion-recognition-in-conversations
A comprehensive reading list for Emotion Recognition in Conversations
awesome-sentiment-analysis
Reading list for Awesome Sentiment Analysis papers
conv-emotion
This repository contains implementations of different architectures for emotion recognition in conversations.
flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
multimodal-deep-learning
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
nora
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
tango
A family of diffusion models for text-to-audio generation.
TangoFlux
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching
Deep Cognition and Language Research (DeCLaRe) Lab's Repositories
declare-lab/tango
A family of diffusion models for text-to-audio generation.
declare-lab/MELD
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
declare-lab/TangoFlux
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
declare-lab/nora
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks
declare-lab/jamify
JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment
declare-lab/LLM-PuzzleTest
This repository is maintained to release datasets and models for multimodal puzzle reasoning.
declare-lab/Emma-X
Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
declare-lab/trust-align
Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
declare-lab/HyperTTS
declare-lab/della
DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling
declare-lab/MM-Align
[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
declare-lab/MM-InstructEval
This repository contains code to evaluate various multimodal large language models using different instructions across multiple multimodal content comprehension tasks.
declare-lab/resta
Restore safety in fine-tuned language models through task arithmetic
declare-lab/ferret
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique
declare-lab/adapter-mix
declare-lab/safety-arithmetic
declare-lab/Sealing
[NAACL 2024] Official implementation of the paper "Self-Adaptive Sampling for Efficient Video Question Answering on Image-Text Models"
declare-lab/LLM-ReasoningTest
Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions
declare-lab/Auto-Scaling
[arXiv 2024] Official implementation of the paper "Towards Robust Instruction Tuning on Multimodal Large Language Models"
declare-lab/OffTopicEval
declare-lab/PathFinder-PRM
This repository contains the official implementation of Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision.
declare-lab/PromptDistill
declare-lab/PROVE
declare-lab/darwin
declare-lab/dialogxpert
Codebase for proactive AI in conversations
declare-lab/KAIROS
KAIROS: An LLM Eval Technique to Evaluate Multi-Agent Social Interactions
declare-lab/declare-lab.github.io
declare-lab/.github
declare-lab/vlprm
This repository contains the official implementation of Training Vision-Language Process Reward Models for Test-Time Scaling in Multimodal Reasoning: Key Insights and Lessons Learned