lora

There are 1,793 public repositories under the lora topic.

  • labmlai/annotated_deep_learning_paper_implementations

    🧑‍🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, XL, switch, feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠

    Language: Python · ★ 57.7k
  • hiyouga/LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)

    Language: Python · ★ 37.3k
  • unslothai/unsloth

    Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory

    Language: Python · ★ 20k
  • ymcui/Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca LLMs, with local CPU/GPU training and deployment

    Language: Python · ★ 18.6k
  • huggingface/peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

    Language: Python · ★ 16.9k
  • camenduru/stable-diffusion-webui-colab

    stable diffusion webui colab

    Language: Jupyter Notebook · ★ 15.7k
  • microsoft/LoRA

    Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

    Language: Python · ★ 11k
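    The technique named in microsoft/LoRA's description is compact enough to sketch. Below is a minimal, illustrative NumPy version (not loralib's actual API): a frozen weight W gets a trainable low-rank update B·A scaled by alpha/r, so a 1024×1024 layer trains only r(d_in + d_out) parameters instead of d_in·d_out.

    ```python
    import numpy as np

    class LoRALinear:
        """Illustrative sketch of a LoRA-adapted linear layer (not loralib's API)."""
        def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
            self.A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
            self.B = np.zeros((d_out, r))                       # trainable up-projection, init 0
            self.scale = alpha / r                              # update starts at exactly zero

        def forward(self, x):
            # y = x W^T + (x A^T) B^T * (alpha / r)
            return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scale

    layer = LoRALinear(d_in=1024, d_out=1024, r=8)
    x = np.ones((2, 1024))
    y = layer.forward(x)

    # Because B is initialized to zero, the adapted layer initially matches the base layer.
    assert np.allclose(y, x @ layer.W.T)

    # Trainable parameters: r*(d_in + d_out) = 16,384 vs 1,048,576 for full fine-tuning.
    print(8 * (1024 + 1024), 1024 * 1024)
    ```

    Zero-initializing B is the design choice that makes this safe to bolt onto a pretrained model: training starts from the base model's exact behavior.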
  • datawhalechina/self-llm

    "A Cookbook for Open-Source LLMs" (《开源大模型食用指南》): beginner-oriented tutorials for quickly fine-tuning (full-parameter/LoRA) and deploying open-source LLMs and multimodal large models (MLLMs), Chinese and international, in a Linux environment

    Language: Jupyter Notebook · ★ 10.7k
  • LianjiaTech/BELLE

    BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM)

    Language: HTML · ★ 8k
  • cloneofsimo/lora

    Using Low-rank adaptation to quickly fine-tune diffusion models.

    Language: Jupyter Notebook · ★ 7.1k
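    One reason LoRA caught on for diffusion fine-tuning, as in cloneofsimo/lora above, is that the learned update can be folded back into the base weight for zero-overhead inference. A NumPy sketch of that merge (illustrative only, not this repo's code):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    d, r = 64, 4
    W = rng.standard_normal((d, d))          # frozen base weight
    A = rng.standard_normal((r, d)) * 0.1    # trained LoRA factors
    B = rng.standard_normal((d, r)) * 0.1
    scale = 1.0                              # alpha / r

    # Inference-time merge: fold the low-rank update into the base weight once,
    # so the adapted model has exactly the same shape and cost as the original.
    W_merged = W + (B @ A) * scale

    x = rng.standard_normal((3, d))
    y_adapter = x @ W.T + (x @ A.T) @ B.T * scale  # base path + adapter path
    y_merged = x @ W_merged.T                      # single merged matmul

    assert np.allclose(y_adapter, y_merged)
    ```

    The same algebra runs in reverse (W_merged − B·A·scale), which is why adapters can be swapped in and out of a single loaded checkpoint.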
  • yangjianxin1/Firefly

    Firefly: a training toolkit for large models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other LLMs

    Language: Python · ★ 6k
  • lyogavin/airllm

    AirLLM: 70B-model inference with a single 4 GB GPU

    Language: Jupyter Notebook · ★ 5.5k
  • modelscope/ms-swift

    Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) or 150+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2.5, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL2, Phi3.5-Vision, GOT-OCR2, ...).

    Language: Python · ★ 4.9k
  • Akegarasu/lora-scripts

    SD-Trainer: LoRA & DreamBooth training scripts and GUI for diffusion models, using kohya-ss's trainer.

    Language: Python · ★ 4.8k
  • meshtastic/firmware

    Meshtastic device firmware

    Language: C++ · ★ 3.8k
  • ExpressLRS/ExpressLRS

    ESP32/ESP8285-based High-Performance Radio Link for RC applications

    Language: C++ · ★ 3.8k
  • mymusise/ChatGLM-Tuning

    A fine-tuning recipe based on ChatGLM-6B + LoRA

    Language: Python · ★ 3.8k
  • hiyouga/ChatGLM-Efficient-Tuning

    Efficient fine-tuning of ChatGLM-6B with PEFT

    Language: Python · ★ 3.7k
  • 1technophile/OpenMQTTGateway

    MQTT gateway for ESP8266 or ESP32 with bidirectional 433/315/868 MHz and infrared communications, BLE, Bluetooth, beacon detection, Mi Flora, Mi Jia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ-thermometer compatibility, plus LoRa.

    Language: C++ · ★ 3.7k
  • wenge-research/YAYI

    YaYi LLM: secure, reliable proprietary large models for customers — LLaMA 2 & BLOOM series models trained on large-scale Chinese-English multi-domain instruction data, developed by the Wenge (中科闻歌) algorithm team. (Repo for YaYi Chinese LLMs based on LLaMA 2 & BLOOM)

    Language: Python · ★ 3.3k
  • liucongg/ChatGLM-Finetuning

    Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning

    Language: Python · ★ 2.7k
  • dvlab-research/LongLoRA

    Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)

    Language: Python · ★ 2.7k
  • PhoebusSi/Alpaca-CoT

    We unify the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use, giving researchers an accessible platform for fine-tuning large models. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible.

    Language: Jupyter Notebook · ★ 2.7k
  • stochasticai/xTuring

    Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

    Language: Python · ★ 2.6k
  • adapter-hub/adapters

    A Unified Library for Parameter-Efficient and Modular Transfer Learning

    Language: Jupyter Notebook · ★ 2.6k
  • absmach/supermq

    Event-driven Infrastructure for Modern Cloud

    Language: Go · ★ 2.5k
  • predibase/lorax

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

    Language: Python · ★ 2.3k
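    The lorax description above hinges on one observation: LoRA adapters are tiny relative to the base model, so thousands can sit in memory beside a single copy of the base weights and be selected per request. A toy sketch of that routing idea (the adapter names and `serve` helper are hypothetical, not lorax's API):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 32
    W = rng.standard_normal((d, d))  # one shared base weight, loaded once

    # Hypothetical registry of per-tenant adapters: each is just an (A, B) pair,
    # a few KB at realistic sizes, versus the full base model's footprint.
    adapters = {
        name: (rng.standard_normal((4, d)) * 0.1, rng.standard_normal((d, 4)) * 0.1)
        for name in ("customer-a", "customer-b")
    }

    def serve(x, adapter_name=None):
        """Route a request through the shared base, adding the requested adapter's update."""
        y = x @ W.T
        if adapter_name is not None:
            A, B = adapters[adapter_name]
            y = y + (x @ A.T) @ B.T
        return y

    x = rng.standard_normal((1, d))
    base_out = serve(x)
    a_out = serve(x, "customer-a")
    b_out = serve(x, "customer-b")
    assert not np.allclose(a_out, b_out)  # different adapters => different models, same base
    ```

    A production server additionally batches requests for different adapters into one forward pass; this sketch only shows why the memory math works out.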
  • ashishpatel26/LLM-Finetuning

    LLM fine-tuning with PEFT

    Language: Jupyter Notebook · ★ 2.3k
  • markqvist/Reticulum

    The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.

    Language: Python · ★ 2.2k
  • chenking2020/FindTheChatGPTer

    ChatGPT's explosive rise marked a key step toward AGI; this project catalogs open-source alternatives to ChatGPT, including text LLMs and multimodal large models, for everyone's convenience

  • Nerogar/OneTrainer

    OneTrainer is a one-stop solution for all your stable diffusion training needs.

    Language: Python · ★ 1.9k
  • cyberman54/ESP32-Paxcounter

    WiFi & BLE-driven passenger flow metering with cheap ESP32 boards

    Language: C++ · ★ 1.8k
  • siliconflow/onediff

    OneDiff: An out-of-the-box acceleration library for diffusion models.

    Language: Jupyter Notebook · ★ 1.8k
  • ssbuild/chatglm_finetuning

    ChatGLM-6B fine-tuning and Alpaca fine-tuning

    Language: Python · ★ 1.5k
  • brocaar/chirpstack-network-server

    ChirpStack Network Server is an open-source LoRaWAN network server.

    Language: Go · ★ 1.5k
  • markqvist/NomadNet

    Communicate Freely

    Language: Python · ★ 1.3k