instruction-following
There are 47 public repositories under the instruction-following topic.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
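The Alpaca repository's training data uses a simple record format — each example is a JSON object with "instruction", "input" (possibly empty), and "output" fields, rendered into a prompt via fixed templates. A minimal sketch of that formatting step (the templates follow the ones published in the stanford_alpaca repo):

```python
# Sketch of the Alpaca instruction-data format: each record has
# "instruction", "input" (may be empty), and "output" keys.
# Templates follow those published in the stanford_alpaca repository.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def format_example(record: dict) -> str:
    """Render one instruction record into a training prompt string."""
    if record.get("input"):
        return PROMPT_WITH_INPUT.format(
            instruction=record["instruction"], input=record["input"]
        )
    return PROMPT_NO_INPUT.format(instruction=record["instruction"])

example = {
    "instruction": "Translate the sentence to French.",
    "input": "Good morning.",
    "output": "Bonjour.",
}
prompt = format_example(example)
```

During fine-tuning, the model is trained to produce the "output" field as the continuation of the rendered prompt.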
BradyFU/Awesome-Multimodal-Large-Language-Models
✨✨ Latest Advances on Multimodal Large Language Models
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
zjunlp/LLMAgentPapers
Must-read Papers on LLM Agents.
tatsu-lab/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
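Evaluators in this style typically report a pairwise win rate: an LLM judge compares the candidate model's response against a baseline's for each prompt, and ties count as half a win. A minimal sketch of that metric (this is a generic illustration, not alpaca_eval's actual API):

```python
# Hedged sketch (not alpaca_eval's real interface): computing a pairwise
# win rate from per-prompt judge preferences. Each entry records which
# response the judge preferred: "model", "baseline", or "tie".

def win_rate(preferences: list[str]) -> float:
    """Fraction of comparisons the model wins; a tie counts as 0.5."""
    wins = sum(1.0 for p in preferences if p == "model")
    ties = sum(0.5 for p in preferences if p == "tie")
    return (wins + ties) / len(preferences)

# 2 wins, 1 tie, 1 loss over 4 prompts -> (2 + 0.5) / 4 = 0.625
rate = win_rate(["model", "tie", "model", "baseline"])
```

A win rate of 0.5 means the model is indistinguishable from the baseline under this judge; values above 0.5 indicate it is preferred.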
zjunlp/KnowLM
An Open-Source Knowledgeable Large Language Model Framework.
Tebmer/Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
yaodongC/awesome-instruction-dataset
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
tatsu-lab/alpaca_farm
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
VinAIResearch/PhoGPT
PhoGPT: Generative Pre-training for Vietnamese (2023)
SinclairCoder/Instruction-Tuning-Papers
Reading list on instruction tuning, a trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
kevinamiri/Instructgpt-prompts
A collection of ChatGPT and GPT-3.5 instruction-based prompts for generating and classifying text.
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
OSU-NLP-Group/MagicBrush
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
baaivision/EVE
EVE Series: Encoder-Free Vision-Language Models from BAAI
zjunlp/Mol-Instructions
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
bagh2178/UniGoal
[CVPR 2025] UniGoal: Towards Universal Zero-shot Goal-oriented Navigation
YJiangcm/Lion
[EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models
A-baoYang/alpaca-7b-chinese
Finetune LLaMA-7B with Chinese instruction datasets
YangLing0818/EditWorld
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
YJiangcm/FollowBench
[ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models
PyThaiNLP/WangChanGLM
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
zjr2000/Awesome-Multimodal-Chatbot
Awesome Multimodal Assistant is a curated list of multimodal chatbots and conversational assistants that use multiple interaction modes, such as text, speech, images, and video, to provide a seamless and versatile user experience.
LG-AI-EXAONE/KoMT-Bench
Official repository for KoMT-Bench built by LG AI Research
haoliuhl/instructrl
Instruction Following Agents with Multimodal Transformers
DreamerGPT/DreamerGPT
🌱 DreamerGPT: instruction fine-tuning of a Chinese large language model
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
gistvision/moca
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long-horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
tamlhp/awesome-instruction-editing
Awesome Instruction Editing: a curated list on instruction-guided image and media editing with human instructions.
2toinf/IVM
[NeurIPS-2024] The official implementation of "Instruction-Guided Visual Masking"
ParthaPRay/LLM-Learning-Sources
This repo contains a list of channels and sources for learning about LLMs
tml-epfl/icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025]
zjunlp/InstructCell
A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following
Lichang-Chen/AlpaGasus
A better Alpaca Model Trained with Less Data (only 9k instructions of the original set)
lizhaoliu-Lec/CG-VLM
This is the official repo for Contrastive Vision-Language Alignment Makes Efficient Instruction Learner.
mchl-labs/stambecco
The home of Stambecco 🦌: Italian Instruction-following LLaMA Model