MingLiiii
Second-year Ph.D. student at UMD. I am happy to discuss and collaborate!
University of Maryland, U.S.
Pinned Repositories
Efficient-LLMs-Survey
[TMLR 2024] Efficient Large Language Models: A Survey
Layer_Gradient
What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
MingLiiii.github.io
alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
Cherry_LLM
[NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models
DEBATunE
[ACL'24] Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements
Mosaic-IT
Mosaic IT: Enhancing Instruction Tuning with Data Mosaics
Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
RuleR
RuleR: Improving LLM Controllability by Rule-based Data Recycling
Superfiltering
[ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning