parameter-efficient-tuning
There are 48 repositories under the parameter-efficient-tuning topic.
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
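The core idea this library builds on is the bottleneck adapter: a small down-projection, a nonlinearity, and an up-projection inserted into each layer with a residual connection. A minimal plain-Python sketch of that idea (toy sizes; the function and names `W_down`/`W_up` are illustrative, not the library's API):

```python
def adapter_forward(h, W_down, W_up):
    """Houlsby-style bottleneck adapter: h + W_up(relu(W_down(h))).

    h:      hidden state, length d
    W_down: d x r matrix (r << d), down-projection
    W_up:   r x d matrix, up-projection (typically zero-initialized
            so the adapter starts out as an identity function)
    """
    d, r = len(W_down), len(W_down[0])
    z = [max(0.0, sum(h[i] * W_down[i][j] for i in range(d)))
         for j in range(r)]
    delta = [sum(z[j] * W_up[j][k] for j in range(r)) for k in range(d)]
    return [h[k] + delta[k] for k in range(d)]
```

Because only `W_down` and `W_up` receive gradients, each adapter trains roughly 2·d·r parameters instead of the full layer size.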
ttengwang/Awesome_Prompting_Papers_in_Computer_Vision
A curated list of prompt-based papers in computer vision and vision-language learning.
NVlabs/DoRA
[ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
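DoRA's key step can be sketched in a few lines: the adapted weight V = W0 + B·A is split into a direction (each column normalized to unit length) and a trainable per-column magnitude m. A toy plain-Python sketch of that decomposition (illustrative, not the official implementation):

```python
import math

def dora_weight(W0, B, A, m):
    """Weight-decomposed low-rank adaptation (DoRA), conceptually:
    V = W0 + B @ A, then W' = m * V / ||V||_column.

    W0: frozen d x k base weight; B (d x r) and A (r x k) form the
    low-rank update; m is a trainable length-k magnitude vector.
    """
    d, k, r = len(W0), len(W0[0]), len(A)
    V = [[W0[i][j] + sum(B[i][t] * A[t][j] for t in range(r))
          for j in range(k)] for i in range(d)]
    norms = [math.sqrt(sum(V[i][j] ** 2 for i in range(d))) for j in range(k)]
    return [[m[j] * V[i][j] / norms[j] for j in range(k)] for i in range(d)]
```

After the decomposition every column of W' has L2 norm exactly m[j], so magnitude and direction can be learned independently.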
jianghaojun/Awesome-Parameter-Efficient-Transfer-Learning
A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.
HenryHZY/Awesome-Multimodal-LLM
Research Trends in LLM-guided Multimodal Learning.
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
JieShibo/PETL-ViT
[ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass
ZO-Bench/ZO-LLM
[ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".
changdaeoh/BlackVIP
Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning"
eric-ai-lab/PEViT
Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"
thunlp/Prompt-Transferability
On Transferability of Prompt Tuning for Natural Language Processing
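Prompt tuning, the method whose transferability this repo studies, freezes the entire model and trains only a short sequence of continuous "soft prompt" vectors prepended to the input embeddings. A minimal sketch (the toy embedding table and names are hypothetical):

```python
def embed_with_soft_prompt(token_ids, embedding_table, soft_prompt):
    """Prepend trainable soft-prompt vectors to frozen token embeddings.

    embedding_table: frozen id -> vector lookup (not updated in training)
    soft_prompt:     list of trainable vectors, the only tuned parameters
    """
    token_embeddings = [embedding_table[t] for t in token_ids]
    return soft_prompt + token_embeddings  # concatenate along the sequence axis
```

Transferring a prompt across tasks or models then amounts to reusing (or projecting) these few vectors rather than any model weights.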
ZhengxiangShi/DePT
[ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
WillDreamer/Aurora
[NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model
Paranioar/UniPT
[CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
morningmoni/UniPELT
Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022
LeapLabTHU/Cross-Modal-Adapter
[Pattern Recognition 2025] Cross-Modal Adapter for Vision-Language Retrieval
zhangyikaii/LAMDA-ZhiJian
ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse
knightyxp/DGL
[AAAI 2024] DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval.
OSU-MLB/ViT_PEFT_Vision
[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition
bighuang624/VoP
[CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval
jaisidhsingh/LoRA-CLIP
Easy wrapper for inserting LoRA layers in CLIP.
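For context on what such a wrapper inserts: a LoRA layer adds a rank-r update B·A to a frozen weight W, scaled by alpha/r, with B zero-initialized so training starts from the unmodified model. A hedged plain-Python sketch of one LoRA-augmented linear layer (not this repo's code):

```python
def lora_linear(x, W, A, B, alpha, r):
    """y = W @ x + (alpha / r) * B @ (A @ x), with W frozen.

    W: out x in (frozen); A: r x in; B: out x r (zero-initialized).
    """
    base = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]
    Ax = [sum(A[t][j] * x[j] for j in range(len(x))) for t in range(r)]
    delta = [sum(B[i][t] * Ax[t] for t in range(r)) for i in range(len(B))]
    return [base[i] + (alpha / r) * delta[i] for i in range(len(base))]
```

With B all zeros the output equals the frozen layer's, which is why LoRA fine-tuning begins exactly at the pretrained model.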
ImKeTT/AdaVAE
[Preprint] PyTorch implementation of "AdaVAE: Exploring Adaptive GPT-2s in VAEs for Language Modeling"
westlake-repl/Adapter4Rec
Multi-domain Recommendation with Adapter Tuning
mlvlab/ProMetaR
Official implementation of CVPR 2024 paper "Prompt Learning via Meta-Regularization".
auniquesun/PPT
[ICRA 2024] Official Implementation of the paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
siyi-wind/AViT
[MICCAI ISIC Workshop 2023 (Best Paper)] Official implementation of "AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation Datasets"
adarobustness/adaptation_robustness
Evaluate robustness of adaptation methods on large vision-language models
Allen0307/AdapterBias
Code for the Findings of NAACL 2022 (long paper) "AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks"
danelpeng/Awesome-Continual-Leaning-with-PTMs
A curated list of research on continual learning with pretrained models.
yunqing-me/AdAM
[NeurIPS 2022] Official implementation of AdAM: Few-shot Image Generation via Adaptation-Aware Kernel Modulation
gauss5930/AlpaGasus2-QLoRA
AlpaGasus2-QLoRA: LLaMA2 fine-tuned with the AlpaGasus mechanism using QLoRA.
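QLoRA's trick is to keep the frozen base weights in 4-bit precision (NF4 in the paper) while training LoRA adapters on top in higher precision. The quantize/dequantize round trip can be sketched with a simpler symmetric absmax scheme (an illustration of the idea, not the NF4 data type):

```python
def quantize_absmax(w, bits=4):
    """Symmetric absmax quantization to signed `bits`-bit integers."""
    levels = 2 ** (bits - 1) - 1          # e.g. +/-7 for 4-bit
    scale = max(abs(v) for v in w) / levels
    return [round(v / scale) for v in w], scale

def dequantize(q, scale):
    """Recover approximate float weights for the frozen forward pass."""
    return [v * scale for v in q]
```

During fine-tuning only the LoRA matrices receive gradients; the 4-bit base weights are dequantized on the fly for each forward pass.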
WHU-ZQH/PANDA
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
daskol/lotr
Low Tensor Rank adaptation of large language models
declare-lab/domadapter
Code for EACL'23 paper "Udapter: Efficient Domain Adaptation Using Adapters"
pha123661/NTU-2022Fall-ADL
Applied Deep Learning (深度學習之應用), taught by Vivian Chen (陳縕儂) at NTU CSIE
louisc-s/QLoRA-Fine-tuning-for-Film-Character-Styled-Responses-from-LLM
Code for fine-tuning the Llama 2 LLM on a custom text dataset to produce film-character-styled responses