Pinned Repositories
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
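A minimal sketch of the idea behind that claim (illustrative only, not RWKV's actual kernel): a linear recurrence can be evaluated sequentially like an RNN (constant-size state at inference) or expanded into a parallel matrix form over the whole sequence, which is what makes GPT-style parallel training possible.

```python
import numpy as np

# Illustrative linear recurrence: s_t = decay * s_{t-1} + x_t.
# The same outputs can be computed two ways:
#   - recurrently, one step at a time (RNN-style inference), or
#   - in parallel over the whole sequence, since
#     s_t = sum_{j<=t} decay**(t-j) * x_j  (transformer-style training).

def recurrent(x, decay=0.9):
    s, out = 0.0, []
    for xt in x:
        s = decay * s + xt   # O(1) state carried between steps
        out.append(s)
    return np.array(out)

def parallel(x, decay=0.9):
    t = np.arange(len(x))
    # w[t, j] = decay**(t-j) for j <= t, zero above the diagonal
    w = np.tril(decay ** (t[:, None] - t[None, :]))
    return w @ x             # one matmul over the full sequence

x = np.linspace(0.1, 1.0, 8)
assert np.allclose(recurrent(x), parallel(x))
```

Both paths produce identical outputs, which is the sense in which an RNN of this shape is "directly trainable like a GPT".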
LLM_finetune
My finetune code for LLM (Llama)
LLaMA-Factory
Unify Efficient Fine-Tuning of 100+ LLMs
character_AI_open
Generate multi-round conversation roleplay data based on self-instruct and evol-instruct.
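A hypothetical sketch of such a generation loop, assuming nothing about this repository's actual code: `llm` is a placeholder for any chat-model call; the evol-instruct step rewrites a seed prompt into a harder variant, and the self-instruct step asks the model to propose the next user turn.

```python
def llm(prompt: str) -> str:
    # Placeholder for a real chat-model API call (assumption, not this repo's API).
    return f"<model reply to: {prompt[:40]}>"

def evolve(instruction: str) -> str:
    # Evol-instruct style step: rewrite the seed into a more complex variant.
    return llm(f"Rewrite this roleplay instruction to be more complex: {instruction}")

def generate_dialogue(seed: str, rounds: int = 3) -> list[dict]:
    # Self-instruct style loop: alternate model replies with model-proposed
    # user turns to build a multi-round roleplay conversation.
    instruction = evolve(seed)
    history, user_turn = [], instruction
    for _ in range(rounds):
        reply = llm(user_turn)
        history.append({"user": user_turn, "assistant": reply})
        user_turn = llm(f"Given this dialogue, write the user's next message: {history}")
    return history

data = generate_dialogue("You are a medieval innkeeper; stay in character.")
```

In practice each generated sample would also pass through a filtering step before joining the training set; that is omitted here for brevity.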
Orion
Orion-14B is a family of models that includes a 14B-parameter multilingual foundation LLM and a series of derived models: a chat model, a long-context model, a quantized model, a RAG fine-tuned model, and an Agent fine-tuned model.