parameter-efficient-fine-tuning

There are 31 repositories under the parameter-efficient-fine-tuning topic.

  • NVlabs/DoRA

    [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (a minimal sketch of the idea appears after this list)

    Language: Python
  • synbol/Awesome-Parameter-Efficient-Transfer-Learning

    Collection of awesome parameter-efficient fine-tuning resources.

  • Paranioar/Awesome_Matching_Pretraining_Transfering

    A paper list on large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, offered as a preliminary overview.

  • liuqidong07/MOELoRA-peft

    [SIGIR '24] The official implementation of MOELoRA (see the mixture-of-experts LoRA sketch after this list).

    Language: Python
  • Chongjie-Si/Subspace-Tuning

    A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.

    Language: Python
  • ZhengxiangShi/DePT

    [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

    Language: Python
  • Paranioar/UniPT

    [CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"

    Language: Python
  • ziplab/SPT

    [ICCV 2023 Oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".

    Language: Python
  • architkaila/Fine-Tuning-LLMs-for-Medical-Entity-Extraction

    Exploring the potential of fine-tuning large language models (LLMs) such as Llama 2 and StableLM for medical entity extraction. The project adapts these models with PEFT, Adapter V2, and LoRA to extract drug names and adverse side effects from pharmaceutical texts efficiently and accurately (a minimal PEFT/LoRA sketch appears after this list).

    Language: Python
  • miccunifi/KDPL

    [ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation

    Language: Python
  • astra-vision/FAMix

    [CVPR 2024] Official repository of "A Simple Recipe for Language-guided Domain Generalized Segmentation"

    Language: Python
  • iboing/CorDA

    CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024)

    Language: Python
  • umbertocappellazzo/PETL_AST

    This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".

    Language: Python
  • OSU-MLB/PETL_Vision

    Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition

    Language: Jupyter Notebook
  • auniquesun/PPT

    [ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"

    Language: Jupyter Notebook
  • PurdueDigitalTwin/MACP

    [WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.

    Language: Python
  • astra-vision/ProLIP

    Fine-tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia

  • alinourian/Fine-tuning-Mistral-7b-QA

    Fine-tuning Mistral-7B with PEFT (parameter-efficient fine-tuning) and LoRA (low-rank adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).

    Language: Jupyter Notebook
  • fredzzhang/atlas

    Official PyTorch implementation for the NeurIPS '24 paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling" (a task-vector sketch appears after this list)

    Language: Python
  • rochitasundar/Generative-AI-with-Large-Language-Models

    This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".

    Language: Jupyter Notebook
  • cityuhkai/SBoRA

    Language: Python
  • GeorgeVern/lmcor

    Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"

    Language: Python
  • CASE-Lab-UMD/Router-Tuning-Mixture-of-Depths

    The open-source Mixture of Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for Enabling Dynamic Depth in Transformers."

    Language: Python
  • Raman1121/FairTune

    A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis

    Language: Python
  • fork123aniket/LLM-RAG-powered-QA-App

    A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App

    Language: Python
  • Paranioar/SHERL

    [ECCV2024] The code of "SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning"

    Language: Python
  • Andy-LZH/peft4clip

    Parameter Efficient Fine-Tuning for CLIP

    Language: Python
  • ssfgunner/SNELL

    [NeurIPS 2024] This is the official repository for our paper "Expanding Sparse Tuning for Low Memory Usage".

    Language: Python
  • giuseppedipoce/Task-Arithmetic-Tuning-of-MobileNetV2-

    This repository contains a project whose goal is to find a new parameter-efficient fine-tuning framework that improves the performance of deep neural networks on out-of-distribution (OOD) data. In this specific case, it tackles a multi-task learning problem.

    Language: Jupyter Notebook
  • KayvanShah1/UniFAQ

    Fine-Tuned LLM-Based FAQ Generation for University Admissions: a project involving the fine-tuning of state-of-the-art language models, including LLaMA-3 8B, LLaMA-2 7B, Mistral 7B, T5, and BART, leveraging QLoRA PEFT (a minimal QLoRA sketch appears after this list).

    Language: Jupyter Notebook
  • qiqinyi/GenAI-with-LLMs

    My lab work for the "Generative AI with Large Language Models" course offered by DeepLearning.AI and Amazon Web Services on Coursera.

    Language: Jupyter Notebook
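
technique sketches

NVlabs/DoRA decomposes each pretrained weight matrix into a magnitude vector and a direction matrix, then applies a LoRA-style low-rank update to the direction only. Below is a minimal PyTorch sketch of that idea; the class name, initialization, and rank are illustrative, not the repository's API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DoRALinear(nn.Module):
        """Sketch of weight-decomposed low-rank adaptation (DoRA).

        Adapted weight: m * V / ||V||_c with V = W0 + B @ A, where ||.||_c is
        the column-wise norm; only m, A, and B are trained.
        """

        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.weight = base.weight                 # frozen pretrained W0
            self.weight.requires_grad_(False)
            out_f, in_f = base.weight.shape
            self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
            self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no update at start
            self.m = nn.Parameter(self.weight.norm(dim=0, keepdim=True))  # column magnitudes

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            v = self.weight + self.B @ self.A         # updated direction
            v = v / v.norm(dim=0, keepdim=True)       # unit-norm columns
            return F.linear(x, self.m * v)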
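liuqidong07/MOELoRA-peft combines LoRA with a mixture of experts: several low-rank expert updates are blended by a gate conditioned on the task. A minimal sketch of that combination, assuming a learned task embedding is available (all names and shapes here are illustrative):

    import torch
    import torch.nn as nn

    class MoELoRADelta(nn.Module):
        """Sketch: n_experts LoRA updates mixed by a task-conditioned gate."""

        def __init__(self, in_f: int, out_f: int, rank: int = 4,
                     n_experts: int = 4, task_dim: int = 16):
            super().__init__()
            self.A = nn.Parameter(torch.randn(n_experts, rank, in_f) * 0.01)
            self.B = nn.Parameter(torch.zeros(n_experts, out_f, rank))
            self.gate = nn.Linear(task_dim, n_experts)

        def forward(self, x: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
            w = torch.softmax(self.gate(task_emb), dim=-1)     # expert weights
            # Blend each expert's low-rank update B_e @ A_e by its gate weight.
            delta = torch.einsum("e,eor,eri->oi", w, self.B, self.A)
            return x @ delta.T   # update added to the frozen layer's output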
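Entries such as architkaila/Fine-Tuning-LLMs-for-Medical-Entity-Extraction and alinourian/Fine-tuning-Mistral-7b-QA follow the standard Hugging Face peft recipe: wrap a frozen base model with LoRA adapters and train only the adapter weights. A sketch with illustrative hyperparameters (the checkpoint name is only an example):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Any causal LM works here; this checkpoint name is just an example.
    model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=16,                                 # rank of the low-rank update
        lora_alpha=32,                        # scaling applied to the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()        # typically well under 1% trainable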
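fredzzhang/atlas builds on task arithmetic: a task vector is the difference between fine-tuned and pretrained weights, and models are composed by adding scaled task vectors back to the pretrained weights. The paper learns anisotropic (per-parameter-block) scaling coefficients; the sketch below shows only the basic arithmetic with scalar coefficients, with hypothetical helper names.

    import torch

    def task_vector(pretrained: dict, finetuned: dict) -> dict:
        """tau = theta_finetuned - theta_pretrained, per parameter tensor."""
        return {k: finetuned[k] - pretrained[k] for k in pretrained}

    def compose(pretrained: dict, taus: list, coeffs: list) -> dict:
        """theta = theta_pre + sum_t lambda_t * tau_t (scalar-coefficient
        variant; the paper learns separate coefficients per parameter block)."""
        theta = {k: v.clone() for k, v in pretrained.items()}
        for tau, lam in zip(taus, coeffs):
            for k in theta:
                theta[k] = theta[k] + lam * tau[k]
        return theta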
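KayvanShah1/UniFAQ uses QLoRA, where the base model is loaded in 4-bit precision and LoRA adapters are trained on top. A minimal sketch assuming the usual transformers + peft + bitsandbytes pattern (the checkpoint name is only an example):

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # quantize base weights to 4 bits
        bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
    )
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
    )
    model = prepare_model_for_kbit_training(model)  # casts + checkpointing prep
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             task_type="CAUSAL_LM"))
    model.print_trainable_parameters()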