peft

There are 135 repositories under the peft topic.

  • hiyouga/LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

    Language: Python
  • yangjianxin1/Firefly

    Firefly: a training toolkit for large language models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models

    Language: Python
  • mymusise/ChatGLM-Tuning

    A fine-tuning solution based on ChatGLM-6B + LoRA

    Language: Python
  • hiyouga/ChatGLM-Efficient-Tuning

    Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT

    Language: Python
  • InternLM/xtuner

    An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)

    Language: Python
  • stochasticai/xTuring

    Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

    Language: Python
  • lxe/simple-llm-finetuner

    Simple UI for LLM Model Finetuning

    Language: Jupyter Notebook
  • modelscope/swift

    ms-swift: Use PEFT or Full-parameter to finetune 250+ LLMs or 35+ MLLMs. (Qwen2, GLM4, Internlm2, Yi, Llama3, Llava, Deepseek, Baichuan2...)

    Language: Python
  • ashishpatel26/LLM-Finetuning

    LLM Finetuning with peft

    Language: Jupyter Notebook
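
The idea these PEFT/LoRA fine-tuning repositories share is to freeze the base weight matrix W and train only a low-rank update, merging it back as W + (alpha/r)·B·A. A minimal pure-Python sketch of that merge (matrix sizes and values here are illustrative, not taken from any listed repo):

```python
# LoRA weight merge: the frozen base weight W gains a trainable
# low-rank delta B @ A, scaled by alpha / r. Only A (r x in) and
# B (out x r) hold trainable parameters during fine-tuning.

def matmul(X, Y):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A."""
    delta = matmul(B, A)  # out x in, rank at most r
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[1.0, 2.0]]               # r=1, in=2
B = [[0.5], [0.5]]             # out=2, r=1
W_eff = lora_effective_weight(W, A, B, alpha=2, r=1)
# W_eff == [[2.0, 2.0], [1.0, 3.0]]
```

With r much smaller than the layer width, the trainable parameter count drops from out·in to r·(out+in), which is what makes fine-tuning 7B+ models feasible on a single GPU.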
  • zyds/transformers-code

    A hands-on Huggingface Transformers course; course videos are updated in sync on Bilibili and YouTube

    Language: Jupyter Notebook
  • zetavg/LLaMA-LoRA-Tuner

    UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. Includes a Gradio ChatGPT-like chat UI to demonstrate your language models.

    Language: Python
  • Guitaricet/relora

    Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates

    Language: Jupyter Notebook
  • X-LANCE/SLAM-LLM

    Speech, Language, Audio, Music Processing with Large Language Model

    Language: Python
  • mindspore-courses/step_into_llm

    MindSpore online courses: Step into LLM

    Language: Python
  • Joyce94/LLM-RLHF-Tuning

    LLM Tuning with PEFT (SFT+RM+PPO+DPO with LoRA)

    Language: Python
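
Of the stages this repo covers (SFT, reward modeling, PPO, DPO), DPO has the most compact objective: it needs no reward model, only log-probabilities of the chosen and rejected responses under the policy and a frozen reference model. A hedged pure-Python sketch of that loss (function name and inputs are illustrative, not this repo's actual code):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    All four inputs are per-response log-probabilities; beta controls
    how strongly the policy is pushed away from the reference.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# With identical policy and reference log-probs the margin is 0,
# so the loss sits at -log(0.5) = log(2).
baseline = dpo_loss(0.0, 0.0, 0.0, 0.0)
```

Whenever the policy prefers the chosen response more than the reference does, the margin is positive and the loss drops below log(2); training pushes the margin up.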
  • iamarunbrahma/finetuned-qlora-falcon7b-medical

    Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset

    Language: Jupyter Notebook
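
QLoRA keeps the frozen base weights in a low-bit format and dequantizes them on the fly while the LoRA adapters train in higher precision. The real method uses blockwise NF4 quantization; the sketch below uses a simpler symmetric 4-bit absmax scheme purely to illustrate the store-low-bit/dequantize-on-use idea (a toy, not the actual QLoRA quantizer):

```python
# Toy symmetric 4-bit absmax quantization: each block of floats is
# mapped to integers in [-7, 7] plus one float scale, then recovered
# by multiplying back. QLoRA's NF4 uses a non-uniform grid instead.

def quantize_absmax4(xs):
    """Return (int codes in [-7, 7], per-block scale)."""
    scale = max(abs(x) for x in xs) / 7 or 1.0  # avoid scale 0 for all-zero blocks
    return [round(x / scale) for x in xs], scale

def dequantize(codes, scale):
    """Recover approximate floats from codes and the block scale."""
    return [c * scale for c in codes]

codes, scale = quantize_absmax4([1.4, 0.2, 0.0])
# codes == [7, 1, 0]; dequantize(codes, scale) is within scale/2 of the input
```

Storing one scale per block plus 4-bit codes cuts base-weight memory roughly 4x versus fp16, which is why a 7B model like Falcon-7B fits on a single consumer GPU during fine-tuning.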
  • jackaduma/Vicuna-LoRA-RLHF-PyTorch

    A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Vicuna architecture. Basically ChatGPT but with Vicuna

    Language: Python
  • TUDB-Labs/mLoRA

    Provides efficient LLM fine-tuning via multi-LoRA optimization

    Language: Python
  • km1994/llms_paper

    This repository records reading notes on top-conference papers relevant to LLM algorithm engineers (multimodal, PEFT, few-shot QA, RAG, LLM interpretability, Agents, CoT)

  • jianzhnie/open-chatgpt

    The open-source implementation of ChatGPT, Alpaca, Vicuna, and an RLHF pipeline. Implementing a ChatGPT from scratch.

    Language: Python
  • calpt/awesome-adapter-resources

    Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning

    Language: Python
  • jasonvanf/llama-trl

    LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA

    Language: Python
  • jackaduma/ChatGLM-LoRA-RLHF-PyTorch

    A full pipeline to finetune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the ChatGLM architecture. Basically ChatGPT but with ChatGLM

    Language: Python
  • liuqidong07/MOELoRA-peft

    [SIGIR'24] The official implementation code of MOELoRA.

    Language: Python
  • ZhengxiangShi/DePT

    [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

    Language: Python
  • kamalkraj/e5-mistral-7b-instruct

    Finetune mistral-7b-instruct for sentence embeddings

    Language: Python
  • NisaarAgharia/Indian-LawyerGPT

    Fine-Tuning Falcon-7B, LLAMA 2 with QLoRA to create an advanced AI model with a profound understanding of the Indian legal context.

    Language: Jupyter Notebook
  • ziplab/SPT

    [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".

    Language: Python
  • jackaduma/Alpaca-LoRA-RLHF-PyTorch

    A full pipeline to finetune Alpaca LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Alpaca architecture. Basically ChatGPT but with Alpaca

    Language: Python
  • zjohn77/lightning-mlflow-hf

    Use QLoRA to tune LLM in PyTorch-Lightning w/ Huggingface + MLflow

    Language: Python
  • Reason-Wang/flan-alpaca-lora

    This repository contains the code to train Flan-T5 on Alpaca instructions with low-rank adaptation.

    Language: Python
  • UCDvision/NOLA

    Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"

    Language: Python
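
NOLA compresses LoRA further: instead of training the low-rank factors directly, it keeps a fixed bank of random basis matrices and trains only the scalar mixing coefficients. A toy pure-Python sketch of that reparameterization (shapes and values illustrative; the paper applies this to each LoRA factor, which this sketch glosses over):

```python
# NOLA-style delta: a weighted sum of FIXED random basis matrices.
# Only the coefficients c_i are trainable, so the parameter count per
# layer shrinks from the LoRA factors' size to the number of bases.

def nola_delta(bases, coeffs):
    """Return sum_i coeffs[i] * bases[i] for same-shape matrices."""
    rows, cols = len(bases[0]), len(bases[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for c, basis in zip(coeffs, bases):
        for i in range(rows):
            for j in range(cols):
                out[i][j] += c * basis[i][j]
    return out

bases = [[[1.0, 0.0], [0.0, 1.0]],   # in practice: frozen random matrices
         [[0.0, 1.0], [1.0, 0.0]]]   # regenerable from a seed, so near-free to store
delta = nola_delta(bases, [2.0, 3.0])
# delta == [[2.0, 3.0], [3.0, 2.0]]
```

Because the bases can be regenerated from a PRNG seed, a checkpoint only needs the coefficients and the seed.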
  • sharma-n/DAG_Scheduling

    HEFT, randomHEFT and IPEFT algorithms for static list DAG Scheduling

    Language: Jupyter Notebook
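
This is the one non-LLM entry in the list (the topic match is "IPEFT", not "PEFT"). List schedulers in the HEFT family prioritize DAG tasks by "upward rank": a task's average compute cost plus the costliest path of communication and compute down to an exit task. A minimal sketch of that rank computation (graph and costs are toy data, not this repo's API):

```python
# Upward rank for HEFT-style list scheduling:
#   rank_u(t) = cost(t) + max over successors s of (comm(t, s) + rank_u(s))
# Exit tasks (no successors) have rank equal to their own cost.
# Tasks are then scheduled in decreasing rank order.

def upward_rank(task, succs, cost, comm, memo=None):
    """Memoized recursive upward rank of `task` in a DAG."""
    memo = {} if memo is None else memo
    if task in memo:
        return memo[task]
    best_path = max(
        (comm.get((task, s), 0) + upward_rank(s, succs, cost, comm, memo)
         for s in succs.get(task, [])),
        default=0,  # exit task: no successors
    )
    memo[task] = cost[task] + best_path
    return memo[task]

succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}          # toy diamond DAG
cost = {"a": 2, "b": 3, "c": 1, "d": 1}                    # avg compute costs
comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 1}
# upward_rank("a", succs, cost, comm) == 8  (path a -> b -> d)
```

Ranking by the longest remaining path is what lets HEFT schedule critical-path tasks first before filling in processor idle time with off-path work.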
  • Baijiong-Lin/LoRA-Torch

    PyTorch Reimplementation of LoRA

    Language: Python
  • adithya-s-k/CompanionLLM

    CompanionLLM - A framework to finetune LLMs to be your own sentient conversational companion

    Language: Jupyter Notebook
  • neuralwork/instruct-finetune-mistral

    Fine-tune Mistral 7B to generate fashion style suggestions

    Language: Python