peft-fine-tuning-llm
There are 78 public repositories under the peft-fine-tuning-llm topic.
peremartra/Large-Language-Model-Notebooks-Course
Practical course about Large Language Models.
dvgodoy/FineTuningLLMs
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
nbasyl/DoRA
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
TUDB-Labs/MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
UCDvision/NOLA
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
ROIM1998/APT
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
SkyuForever/CRE-LLM
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
brown-palm/AntGPT
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
misonsky/HiFT
Memory-efficient fine-tuning; supports fine-tuning 7B models within 24 GB of GPU memory.
PRITHIVSAKTHIUR/GALLO-3XL
High-quality image generation model, powered by an NVIDIA A100.
StarLight1212/LLM-and-Generative-Models-Community
AI community tutorial, including: LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content safety and control implementation, model distillation techniques, DreamBooth techniques, transfer learning, and more, for practice with real projects.
wrmthorne/cycleformers
A Python library for efficient and flexible cycle-consistency training of transformer models via iterative back-translation. Memory- and compute-efficient techniques such as PEFT adapter switching allow models 7.5x larger to be trained on the same hardware.
DongmingShenDS/Mistral_From_Scratch
Mistral and Mixtral (MoE) from scratch
Hamid-Nasiri/EDoRA
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition
sayan112207/Text2SQL
Fine-tune StarCoder2-3B for SQL tasks on limited resources with LoRA. LoRA reduces the number of trainable parameters, enabling faster training on smaller datasets. StarCoder2 is a family of code generation models (3B, 7B, and 15B), trained on 600+ programming languages from The Stack v2 and some natural language text such as Wikipedia, arXiv, and GitHub issues.
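The resource savings behind LoRA fine-tuning like this come down to simple arithmetic: for one weight matrix, full fine-tuning updates every entry, while LoRA trains only two low-rank factors. A back-of-the-envelope sketch (dimensions are illustrative, not from the repo):

```python
# Why LoRA shrinks the number of *trainable* parameters: for a weight
# matrix of shape (d, k), full fine-tuning updates d*k values; LoRA at
# rank r trains factors A (d, r) and B (r, k), i.e. r*(d + k) values.
def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    full = d * k        # parameters updated by full fine-tuning
    lora = r * (d + k)  # parameters updated by LoRA at rank r
    return full, lora

# Hypothetical square projection of hidden size 3072, LoRA rank 16.
full, lora = lora_trainable_params(d=3072, k=3072, r=16)
print(full, lora, lora / full)  # → 9437184 98304 ~0.0104 (about 1%)
```

At rank 16, the adapter holds roughly 1% of the matrix's parameters, which is why optimizer state and gradients fit on much smaller GPUs.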
aman-17/MediSOAP
Fine-tuning LLMs on a conversational medical dataset.
RETR0-OS/ModelForge
A no-code toolkit to finetune LLMs on your local GPU—just upload data, pick a task, and deploy later. Perfect for hackathons or prototyping, with automatic hardware detection and a guided React interface.
yuki-2025/llama3-8b-fine-tuning-math
Fine-tuning Llama 3 8B to generate JSON output for arithmetic questions and processing that output to perform the calculations.
swastikmaiti/Llama-2-7B-Chat-PEFT
PEFT is a powerful tool that enables training very large models in low-resource environments. Together, quantization and PEFT can enable the widespread adoption of LLMs.
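The core trick that makes low-resource training possible is that the pretrained weight stays frozen and only two small matrices are learned. A minimal NumPy sketch of the LoRA-style adapted layer (shapes and hyperparameters are illustrative, not from the repo):

```python
# LoRA-style adapter in plain NumPy: the frozen weight W is untouched;
# only the low-rank factors A and B would be trained. The adapted layer
# computes x @ W + (alpha / r) * (x @ A @ B).
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01  # trainable low-rank factor
B = np.zeros((r, k))                    # zero init: adapter starts as a no-op

def lora_forward(x: np.ndarray) -> np.ndarray:
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.standard_normal((2, d))
# With B = 0, the adapted output equals the frozen layer's output.
assert np.allclose(lora_forward(x), x @ W)
```

Initializing one factor to zero means fine-tuning starts exactly from the pretrained behavior and only gradually departs from it as A and B are updated.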
Yousra-Chahinez/llama-qlora-finetuning-arabic-sentiment-analysis
This repository contains a notebook for fine-tuning the meta-llama/Llama-3.2-3B-Instruct model (or any other generative language model) using Quantized LoRA (QLoRA) for sentiment classification on the Arabic HARD dataset.
AnanthaPadmanaban-KrishnaKumar/EffiLLaMA
Fine-tuning the LLaMA 3.2-1B-Instruct model using the LoRA and QLoRA PEFT methods.
erdemormann/kanarya-and-trendyol-classification-tests
Test results of Kanarya and Trendyol models with and without fine-tuning techniques on the Turkish tweet hate speech detection dataset.
himanshuvnm/Foundation-Model-Large-Language-Model-FM-LLM
This repository covers key tasks underlying modern generative AI concepts. In particular, it focuses on three coding exercises with Large Language Models; further details are given in the README.md file.
wuweilun/NTU-DLCV-2023
NTU Deep Learning for Computer Vision 2023 course
zeyadusf/Finetuning-LLMs
Finetuning Large Language Models
zeyadusf/topics-in-nlp-llm
In this repo I share various topics in NLP and LLMs.
03chrisk/PEFT-T5-on-CNN-dailynews
Fine-tuning the T5 model on the CNN/DailyMail news dataset.
akthammomani/Casual_Conversation_Chatbot
Build a multi-turn conversational chit-chat bot.
AnishJoshi13/Bash-Scripting-Assistant
A Bash scripting assistant that helps you automate tasks. Powered by a Streamlit chat interface, a fine-tuned nl2bash model generates Bash code from natural language descriptions provided by the user.
Arya920/Natural_Language_To_SQL_Queries
This project converts natural language to SQL queries.
eshan1347/GPT-NEO-LORA
A GPT-Neo model fine-tuned on a custom dataset using the Hugging Face Transformers package.
gabe-zhang/paper2summary
LoRA fine-tuning scripts with Llama-3.2-1B-Instruct on scientific paper summarization
kconstable/LLM-fine-tuning
For this project, I fine-tuned two separate models for three tasks: document summarization, dialogue summarization, and text classification.
silvererudite/generative-ai
Practical projects using LLMs, VLMs, and diffusion models.