Pinned Repositories
evalplus
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024
LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Orion
Orion-14B is a family of models that includes a 14B-parameter multilingual foundation LLM and a series of derived models: a chat model, a long-context model, a quantized model, a RAG fine-tuned model, and an Agent fine-tuned model.
Qwen
The official repo of Qwen (通义千问), the chat and pretrained large language models proposed by Alibaba Cloud.
ConFEDE
Instruction-Fusion
Advancing Prompt Evolution through Hybridization
Mulco
SIFiD
SoftFiNE
TAG
XpastaX's Repositories
XpastaX/ConFEDE
XpastaX/SoftFiNE
XpastaX/Instruction-Fusion
Advancing Prompt Evolution through Hybridization
XpastaX/Mulco
XpastaX/SIFiD
XpastaX/TAG
XpastaX/TaCIE
TaCIE: Enhancing Instruction Comprehension in Large Language Models through Task-Centred Instruction Evolution