instruction-following
There are 41 repositories under the instruction-following topic.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles: Latest Advances on Multimodal Large Language Models
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
zjunlp/LLMAgentPapers
Must-read Papers on LLM Agents.
tatsu-lab/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
zjunlp/KnowLM
An Open-source Knowledgeable Large Language Model Framework.
yaodongC/awesome-instruction-dataset
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca).
tatsu-lab/alpaca_farm
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
SinclairCoder/Instruction-Tuning-Papers
A reading list on instruction tuning, a trend that started with Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
VinAIResearch/PhoGPT
PhoGPT: Generative Pre-training for Vietnamese (2023)
Tebmer/Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
kevinamiri/Instructgpt-prompts
A collection of ChatGPT and GPT-3.5 instruction-based prompts for generating and classifying text.
OSU-NLP-Group/MagicBrush
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
zjunlp/Mol-Instructions
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
baaivision/EVE
[NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models
bigcode-project/bigcodebench
BigCodeBench: Benchmarking Code Generation Towards AGI
YJiangcm/Lion
Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
A-baoYang/alpaca-7b-chinese
Finetune LLaMA-7B with Chinese instruction datasets
YangLing0818/EditWorld
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
PyThaiNLP/WangChanGLM
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
YJiangcm/FollowBench
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
zjr2000/Awesome-Multimodal-Chatbot
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
forhaoliu/instructrl
Instruction Following Agents with Multimodal Transformers
LG-AI-EXAONE/KoMT-Bench
Official repository for KoMT-Bench built by LG AI Research
DreamerGPT/DreamerGPT
🌱 DreamerGPT (梦想家): Instruction fine-tuning of a Chinese large language model
gistvision/moca
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long-horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs).
2toinf/IVM
[NeurIPS-2024] The official implementation of "Instruction-Guided Visual Masking".
tml-epfl/icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs?
ParthaPRay/LLM-Learning-Sources
This repo contains a list of channels and sources for learning about LLMs.
Lichang-Chen/AlpaGasus
A better Alpaca model trained with less data (only 9k instructions from the original set).
lizhaoliu-Lec/CG-VLM
This is the official repo for "Contrastive Vision-Language Alignment Makes Efficient Instruction Learner".
mchl-labs/stambecco
The home of Stambecco 🦌: Italian Instruction-following LLaMA Model
A-baoYang/instruction-finetune-datasets
Collect and maintain high-quality instruction fine-tuning datasets across different domains and languages.
aimonlabs/aimon-python-sdk
This repo hosts the Python SDK and related examples for AIMon, a proprietary, state-of-the-art system for detecting LLM quality issues such as hallucinations. It can be used for offline evals, continuous monitoring, or inline detection, and offers model quality metrics that are fast, reliable, and cost-effective.
declare-lab/RobustMIFT
[arXiv 2024] Official implementation of the paper "Towards Robust Instruction Tuning on Multimodal Large Language Models".