parameter-efficient-fine-tuning
There are 40 repositories under the parameter-efficient-fine-tuning topic.
NVlabs/DoRA
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
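For orientation, DoRA reparameterises a pretrained weight into a learnable magnitude and a direction that carries the low-rank update, roughly W' = m · (W0 + BA) / ||W0 + BA||. Below is a minimal, hypothetical PyTorch sketch of that decomposition, not the NVlabs code; the `DoRALinear` name, rank, scaling, and initialisation are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Hypothetical sketch of weight-decomposed low-rank adaptation (DoRA)
    on top of a frozen nn.Linear; not the official NVlabs implementation."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.weight = base.weight                     # frozen pretrained W0, shape (out, in)
        self.weight.requires_grad_(False)
        self.bias = base.bias
        if self.bias is not None:
            self.bias.requires_grad_(False)
        # Low-rank update delta_W = B @ A, scaled by alpha / r (LoRA-style init).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r
        # Learnable magnitude m, initialised to the per-row norm of W0.
        self.m = nn.Parameter(self.weight.norm(p=2, dim=1, keepdim=True).detach())

    def forward(self, x):
        # Normalise the adapted weight, then rescale by the learned magnitude.
        adapted = self.weight + self.scaling * (self.B @ self.A)
        direction = adapted / adapted.norm(p=2, dim=1, keepdim=True)
        return F.linear(x, self.m * direction, self.bias)
```

Only `A`, `B`, and `m` are trained, so the trainable parameter count stays close to plain LoRA while the magnitude/direction split is learned separately.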
synbol/Awesome-Parameter-Efficient-Transfer-Learning
Collection of awesome parameter-efficient fine-tuning resources.
Paranioar/Awesome_Matching_Pretraining_Transfering
A paper list on large multi-modality models (perception, generation, unification), parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
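As background, the name describes the architecture: several LoRA experts whose updates are mixed by a task-conditioned gate. The sketch below illustrates that general mixture-of-LoRA-experts idea under assumed shapes; the `MoELoRALinear` module and its signature are hypothetical, not the official SIGIR'24 code.

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """Hypothetical sketch of a mixture of LoRA experts with a task-conditioned
    router; shapes and names are illustrative, not the MOELoRA-peft code."""
    def __init__(self, base: nn.Linear, num_experts: int = 4, r: int = 4, task_dim: int = 32):
        super().__init__()
        self.base = base.requires_grad_(False)   # frozen pretrained layer
        self.A = nn.Parameter(torch.randn(num_experts, r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, base.out_features, r))
        self.gate = nn.Linear(task_dim, num_experts)  # router over experts

    def forward(self, x, task_emb):
        # x: (batch, in_features); task_emb: (batch, task_dim)
        w = torch.softmax(self.gate(task_emb), dim=-1)    # (batch, num_experts)
        h = torch.einsum("bi,eri->ber", x, self.A)        # per-expert down-projection
        delta = torch.einsum("ber,eor->beo", h, self.B)   # per-expert up-projection
        return self.base(x) + torch.einsum("beo,be->bo", delta, w)
```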
Chongjie-Si/Subspace-Tuning
A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.
ZhengxiangShi/DePT
[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
architkaila/Fine-Tuning-LLMs-for-Medical-Entity-Extraction
Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama 2 and StableLM for medical entity extraction. This project adapts these models with PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
ziplab/SPT
[ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".
Paranioar/UniPT
[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
miccunifi/KDPL
[ECCV 2024] - Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
ltlhuuu/PSEC
[ICLR 2025] The official implementation of "PSEC: Skill Expansion and Composition in Parameter Space", a new framework designed to facilitate efficient and flexible skill expansion and composition, iteratively evolving agents' capabilities and efficiently addressing new challenges.
astra-vision/FAMix
[CVPR 2024] Official repository of "A Simple Recipe for Language-guided Domain Generalized Segmentation"
iboing/CorDA
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024)
OSU-MLB/ViT_PEFT_Vision
[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition
umbertocappellazzo/PETL_AST
This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
auniquesun/PPT
[ICRA 2024] Official Implementation of the paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
fredzzhang/atlas
[NeurIPS'24] Official PyTorch implementation for paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling"
astra-vision/ProLIP
An extremely simple method for validation-free few-shot adaptation of CLIP-like VLMs that is robust to the learning rate.
PurdueDigitalTwin/MACP
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
CASE-Lab-UMD/Router-Tuning-Mixture-of-Depths
The open-source Mixture of Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for Enabling Dynamic Depth in Transformers" (EMNLP 2025).
rochitasundar/Generative-AI-with-Large-Language-Models
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
alinourian/Fine-tuning-Mistral-7b-QA
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).
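For readers new to this setup, a LoRA run like this one typically attaches low-rank adapters through the Hugging Face peft library. A minimal sketch, assuming the public mistralai/Mistral-7B-v0.1 checkpoint and illustrative hyperparameters rather than this repository's exact settings:

```python
# Minimal LoRA setup with Hugging Face peft; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The wrapped model then trains with any standard causal-LM loop; only the adapter weights receive gradients.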
GeorgeVern/lmcor
Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"
Raman1121/FairTune
A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis
Hamid-Nasiri/EDoRA
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition
fork123aniket/LLM-RAG-powered-QA-App
A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App
Paranioar/SHERL
[ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"
ssfgunner/SNELL
[NeurIPS 2024] This is the official repository for our paper: "Expanding Sparse Tuning for Low Memory Usage".
iurada/talos-task-arithmetic
Official repository of our work "Efficient Model Editing with Task-Localized Sparse Fine-tuning" accepted at ICLR 2025
Andy-LZH/peft4clip
Parameter Efficient Fine-Tuning for CLIP
KayvanShah1/UniFAQ
Fine-Tuned LLM-Based FAQ Generation for University Admissions: a project fine-tuning state-of-the-art language models, including LLaMA-3 8B, LLaMA-2 7B, Mistral 7B, T5, and BART, leveraging QLoRA-based PEFT.
Md-Emon-Hasan/Fine-Tuning
End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques, optimized for low-memory environments and efficient model deployment.
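As a reference point for the QLoRA-style workflow mentioned above: the base model is loaded in 4-bit via bitsandbytes and LoRA adapters are attached on top of the quantized weights. A minimal sketch with an assumed model id and illustrative settings, not this repository's code:

```python
# QLoRA-style setup: 4-bit base model + LoRA adapters; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",            # assumed model id for illustration
    quantization_config=bnb,
)
model = prepare_model_for_kbit_training(model)  # enables checkpointing, casts norms
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
```

Keeping the frozen base in 4-bit is what lets 7B-class models fine-tune on a single consumer GPU.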
RuvenGuna94/Dialogue-Summary-PEFT-Fine-Tuning
This notebook fine-tunes the FLAN-T5 model for dialogue summarization, comparing full fine-tuning with Parameter-Efficient Fine-Tuning (PEFT). It evaluates performance using ROUGE metrics, demonstrating PEFT's efficiency while achieving competitive results.
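For context, the ROUGE comparison between full fine-tuning and PEFT outputs can be computed with the evaluate library; the strings below are placeholders, not results from this notebook:

```python
# Sketch of a ROUGE comparison with the `evaluate` library; inputs are placeholders.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model summarised the dialogue"],
    references=["a reference summary of the dialogue"],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum F-scores
```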
NafisSaleh/TableLLM
This repository is related to the use of large language models (LLMs) on tabular data.