lora

There are 2,313 repositories under the lora topic.

  • annotated_deep_learning_paper_implementations

    labmlai/annotated_deep_learning_paper_implementations

    🧑‍🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠

    Language: Python · 63.2k stars
  • LLaMA-Factory

    hiyouga/LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

    Language: Python · 58.3k stars
  • unsloth

    unslothai/unsloth

    Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.

    Language: Python · 45.5k stars
  • datawhalechina/self-llm

    "The Open-Source Large Model Guide": tutorials, tailor-made for ** beginners, on quickly fine-tuning (full-parameter/LoRA) and deploying open-source large language models (LLMs) and multimodal large models (MLLMs) in a Linux environment

    Language: Jupyter Notebook · 24.3k stars
  • huggingface/peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

    Language: Python · 19.6k stars
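The point of parameter-efficient fine-tuning libraries like PEFT is that LoRA-style methods train only small low-rank factors per target layer instead of the full weight matrix. A minimal, self-contained sketch of the savings (this is not the PEFT API; the layer size and rank are illustrative):

```python
# For a d_out x d_in linear weight, full fine-tuning updates every entry,
# while rank-r LoRA updates only B (d_out x r) and A (r x d_in).

def full_params(d_out, d_in):
    # parameters updated by full fine-tuning of one linear layer
    return d_out * d_in

def lora_params(d_out, d_in, r):
    # parameters updated when the same layer is adapted with rank-r LoRA
    return r * (d_out + d_in)

d_out, d_in, r = 4096, 4096, 8      # illustrative attention projection, rank 8
print(full_params(d_out, d_in))     # 16777216
print(lora_params(d_out, d_in, r))  # 65536 -- ~256x fewer trainable params
```

Since the base weights stay frozen, each fine-tuned task only needs those small factors stored and shipped, which is why one base model can back many adapters.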
  • Chinese-LLaMA-Alpaca

    ymcui/Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

    Language: Python · 18.9k stars
  • camenduru/stable-diffusion-webui-colab

    stable diffusion webui colab

    Language: Jupyter Notebook · 15.9k stars
  • microsoft/LoRA

    Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

    Language: Python · 12.7k stars
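The LoRA paper's core idea is that the frozen pretrained weight W is augmented with a trainable low-rank update B·A scaled by alpha/r. A pure-Python sketch of the effective weight (not the loralib API; the 2x2 matrices are toy values):

```python
# Minimal LoRA sketch: effective weight = W + (alpha / r) * B @ A,
# where W is frozen and only the small factors A and B are trained.

def matmul(X, Y):
    # naive matrix multiply for the small demo matrices below
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_weight(W, A, B, alpha, r):
    # merge the scaled low-rank update into the frozen base weight
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]    # frozen 2x2 base weight
A = [[1.0, 2.0]]                # trainable factor A (r x d_in), r = 1
B = [[0.5], [0.5]]              # trainable factor B (d_out x r)
print(lora_weight(W, A, B, alpha=2, r=1))   # [[2.0, 2.0], [1.0, 3.0]]
```

At inference time the update can be merged into W exactly as above, so a LoRA-adapted model adds no extra latency.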
  • modelscope/ms-swift

    Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, GLM4.5, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (Qwen2.5-VL, Qwen2.5-Omni, InternVL3.5, Ovis2.5, Llava, GLM4v, Phi4, ...) (AAAI 2025).

    Language: Python · 9.9k stars
  • LianjiaTech/BELLE

    BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM)

    Language: HTML · 8.2k stars
  • cloneofsimo/lora

    Using Low-rank adaptation to quickly fine-tune diffusion models.

    Language: Jupyter Notebook · 7.4k stars
  • ART

    OpenPipe/ART

    Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen2.5, Qwen3, Llama, and more!

    Language: Python · 7.2k stars
  • yangjianxin1/Firefly

    Firefly: a training toolkit for large models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models

    Language: Python · 6.5k stars
  • lyogavin/airllm

    AirLLM: 70B-model inference on a single 4GB GPU

    Language: Jupyter Notebook · 5.9k stars
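Running a 70B model on a 4GB GPU works by streaming the model layer by layer: load one layer's weights, apply it, free it, then load the next, so only a single layer is ever resident. A toy sketch of that loop (function names and the "layers" are hypothetical, not AirLLM's API):

```python
# Hypothetical sketch of layered inference: instead of holding the whole
# model in GPU memory, load one layer at a time, apply it, then release it.

def load_layer(i):
    # stand-in for reading one layer's weights from disk into memory;
    # the toy "layer" just adds its index to every activation
    return lambda x: [v + i for v in x]

def run_layer_by_layer(x, num_layers):
    for i in range(num_layers):
        layer = load_layer(i)   # only one layer resident at a time
        x = layer(x)
        del layer               # free this layer before loading the next
    return x

print(run_layer_by_layer([0, 0], num_layers=4))   # [6, 6]
```

The trade-off is throughput: every token pays the cost of re-loading each layer from disk, so this suits memory-constrained single-prompt use rather than high-volume serving.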
  • Akegarasu/lora-scripts

    SD-Trainer: LoRA & DreamBooth training scripts & GUI for diffusion models, using kohya-ss's trainer

    Language: Python · 5.7k stars
  • meshtastic/firmware

    The official firmware for Meshtastic, an open-source, off-grid mesh communication system.

    Language: C++ · 5.5k stars
  • ExpressLRS/ExpressLRS

    ESP32/ESP8285-based High-Performance Radio Link for RC applications

    Language: C++ · 4.4k stars
  • transformerlab/transformerlab-app

    Open Source Application for Advanced LLM + Diffusion Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.

    Language: TypeScript · 4.3k stars
  • OpenMQTTGateway

    1technophile/OpenMQTTGateway

    MQTT gateway for ESP8266 or ESP32 with bidirectional 433MHz/315MHz/868MHz, infrared communications, BLE, Bluetooth, beacon detection, Mi Flora, Mi Jia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ thermometer compatibility & LoRa.

    Language: C++ · 3.9k stars
  • mymusise/ChatGLM-Tuning

    A fine-tuning scheme based on ChatGLM-6B + LoRA

    Language: Python · 3.8k stars
  • hiyouga/ChatGLM-Efficient-Tuning

    Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT

    Language: Python · 3.7k stars
  • predibase/lorax

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

    Language: Python · 3.4k stars
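Serving thousands of fine-tuned LLMs from one server is feasible precisely because LoRA adapters are tiny: the server keeps one frozen base model and selects a per-tenant low-rank delta for each request. A toy sketch of that routing idea (all names and numbers are illustrative, not the LoRAX API):

```python
# Hypothetical multi-LoRA serving sketch: one shared base model plus many
# small per-task adapter deltas, chosen per request by adapter id.

BASE_SCALE = 10.0            # stands in for the shared, frozen base model

ADAPTERS = {                 # each adapter is a tiny task-specific delta;
    "support-bot": 0.5,      # in practice these are low-rank weight factors
    "code-assist": -1.0,
}

def serve(request_input, adapter_id):
    # run the shared base computation, then apply the requested adapter
    delta = ADAPTERS[adapter_id]
    return request_input * BASE_SCALE + delta

print(serve(1.0, "support-bot"))   # 10.5
print(serve(1.0, "code-assist"))   # 9.0
```

Because the base weights are shared across all requests, adding another fine-tuned "model" costs only the memory of its adapter, not another full copy of the LLM.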
  • agentheroes/agentheroes

    Generate, animate and schedule your AI characters 🤖

    Language: TypeScript · 3.4k stars
  • wenge-research/YAYI

    YaYi large models: secure and reliable proprietary large models built for enterprise clients, based on LlaMA 2 & BLOOM series models trained on large-scale multi-domain Chinese and English instruction data, developed by the 中科闻歌 (Wenge) algorithm team. (Repo for YaYi Chinese LLMs based on LlaMA2 & BLOOM)

    Language: Python · 3.2k stars
  • markqvist/Reticulum

    The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.

    Language: Python · 3.1k stars
  • nunchaku-tech/nunchaku

    [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models

    Language: Python · 3k stars
  • PhoebusSi/Alpaca-CoT

    We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (An easy-to-use fine-tuning platform for researchers working with large models; meaningful PRs from open-source enthusiasts are welcome!)

    Language: Jupyter Notebook · 2.8k stars
  • adapter-hub/adapters

    A Unified Library for Parameter-Efficient and Modular Transfer Learning

    Language: Python · 2.8k stars
  • liucongg/ChatGLM-Finetuning

    Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on specific downstream tasks, covering Freeze, LoRA, P-Tuning, and full-parameter fine-tuning

    Language: Python · 2.8k stars
  • PJON

    gioblu/PJON

    PJON (Padded Jittering Operative Network) is an experimental, Arduino-compatible, multi-master, multi-media network protocol.

    Language: C++ · 2.8k stars
  • dvlab-research/LongLoRA

    Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)

    Language: Python · 2.7k stars
  • stochasticai/xTuring

    Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

    Language: Python · 2.7k stars
  • ashishpatel26/LLM-Finetuning

    LLM fine-tuning with PEFT

    Language: Jupyter Notebook · 2.6k stars
  • absmach/supermq

    Event-driven Infrastructure for Modern Cloud

    Language: Go · 2.5k stars
  • OneTrainer

    Nerogar/OneTrainer

    OneTrainer is a one-stop solution for all your stable diffusion training needs.

    Language: Python · 2.5k stars
  • MemTensor/MemOS

    MemOS (Preview) | Intelligence Begins with Memory

    Language: Python · 2.5k stars