Qlb6x's Stars
UKPLab/sentence-transformers
State-of-the-Art Text Embeddings
lmcinnes/umap
Uniform Manifold Approximation and Projection
xialeiliu/Awesome-Incremental-Learning
Awesome Incremental Learning
km1994/LLMs_interview_notes
This repository mainly collects interview questions for large language model (LLM) algorithm engineers
RahulSChand/gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
NVlabs/DoRA
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
sail-sg/lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
BeyonderXX/InstructUIE
Universal information extraction with instruction learning
sabetAI/BLoRA
Batched LoRAs
gstoica27/ZipIt
A framework for merging models trained on different tasks, with different initializations, into a single multi-task model without any additional training
QingruZhang/AdaLoRA
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
Ablustrund/LoRAMoE
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
cmnfriend/O-LoRA
AGI-Edgerunners/LLM-Continual-Learning-Papers
Must-read Papers on Large Language Model (LLM) Continual Learning
liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
microsoft/mttl
Building modular LMs with parameter-efficient fine-tuning.
Qlb6x/DiffusionABSA
[LREC-COLING 2024] DiffusionABSA: Let's Rectify Step by Step: Improving Aspect-based Sentiment Analysis with Diffusion Models
abenhamadou/Self-Supervised-Endoscopic-Image-Key-Points-Matching
jb-01/LoRA-TLE
Token-level adaptation of LoRA matrices for downstream task generalization.