Pinned Repositories
Cherry_LLM
[NAACL'24] Self-guided filtering of LLM instruction-tuning data via a novel perplexity-based difficulty score, without relying on any other models
DEBATunE
[ACL'24] Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements
DisCL
Official repo for DisCL
HallusionBench
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
mctune
[ACL'24] Multi-Objective Linguistic Control of Large Language Models
Mosaic-IT
Mosaic IT: Enhancing Instruction Tuning with Data Mosaics
Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
RuleR
RuleR: Improving LLM Controllability by Rule-based Data Recycling
Superfiltering
[ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning
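Two of the pinned repositories (Cherry_LLM, Superfiltering) rank instruction-tuning samples by a perplexity-based difficulty score. A minimal sketch of that idea, assuming the score is the ratio of the model's perplexity on the response with the instruction as context to its perplexity on the response alone (the exact formulation in the papers may differ); the per-token log-probability lists are hypothetical stand-ins for any language model's output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def difficulty_score(resp_logprobs_given_instruction, resp_logprobs_alone):
    """Ratio of conditioned to unconditioned perplexity on the response.

    A ratio near 1 means the instruction barely helps the model predict
    the response (a hard, informative sample); a low ratio means the
    response is easy once the instruction is given.
    """
    return (perplexity(resp_logprobs_given_instruction)
            / perplexity(resp_logprobs_alone))

# Hypothetical log-probs for one response, with and without the instruction.
with_instr = [-0.2, -0.3, -0.1, -0.4]
without_instr = [-1.0, -1.2, -0.8, -1.5]
print(round(difficulty_score(with_instr, without_instr), 3))  # → 0.417
```

Filtering then amounts to keeping the samples with the highest scores; Superfiltering's "weak-to-strong" twist is that a small, cheap model can compute the ranking in place of the large model being tuned.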
Tianyi Lab @ UMD's Repositories
tianyi-lab/Reflection_Tuning
tianyi-lab/Cherry_LLM
tianyi-lab/HallusionBench
tianyi-lab/Superfiltering
tianyi-lab/Mosaic-IT
tianyi-lab/DEBATunE
tianyi-lab/RuleR
tianyi-lab/mctune
tianyi-lab/DisCL
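Mosaic-IT's core move is packing several instruction-response pairs into one training sample under a meta-instruction. A rough sketch under the assumption that the meta-instruction simply asks the model to answer each sub-task in order (the repository's actual formats and augmentations are richer than this):

```python
import random

def mosaic(samples, k=3, seed=0):
    """Concatenate k instruction-response pairs into one training sample.

    `samples` is a list of {'instruction': ..., 'response': ...} dicts;
    the meta-instruction wording here is illustrative, not the repo's.
    """
    rng = random.Random(seed)
    picked = rng.sample(samples, k)
    meta = f"Answer the following {k} tasks in order, numbering each answer."
    instruction = meta + "\n" + "\n".join(
        f"{i + 1}. {s['instruction']}" for i, s in enumerate(picked))
    response = "\n".join(
        f"{i + 1}. {s['response']}" for i, s in enumerate(picked))
    return {"instruction": instruction, "response": response}

data = [{"instruction": f"Task {c}", "response": f"Answer {c}"} for c in "ABCDE"]
merged = mosaic(data, k=2)
print(merged["instruction"])
print(merged["response"])
```

Because each mosaicked sample covers several tasks at once, the same data budget yields denser supervision per training step, at no extra labeling cost.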