Pinned Repositories
AdaLoRA
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
KD-NLP
LLMforDialogDataGenerate
Generate dialogue data from documents using LLMs such as ChatGLM2 or ChatGPT.
LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (a minimal usage sketch follows this list).
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
qa-lora
Official PyTorch implementation of QA-LoRA
qlora
QLoRA: Efficient Finetuning of Quantized LLMs
relora
Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates".
SoRA
Source code for the EMNLP 2023 main-conference paper "Sparse Low-rank Adaptation of Pre-trained Language Models".
svd-lorafa
Final project for the "Numerical Linear Algebra" course: speeding up the LoRA-FA method with a reasonable initialization.
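To give a flavor of the pinned LoRA repository, here is a minimal sketch of how loralib is typically wired into a PyTorch model, following the usage documented in that repo's README. The layer sizes, rank r, and checkpoint path are illustrative placeholders, not values taken from this profile.

```python
import torch
import torch.nn as nn
import loralib as lora

# Replace a standard nn.Linear with loralib's drop-in LoRA-adapted Linear.
# The feature sizes and rank r=16 are arbitrary placeholder values.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = lora.Linear(768, 768, r=16)  # LoRA-adapted layer
        self.head = nn.Linear(768, 2)                # regular layer, stays frozen

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

model = TinyClassifier()

# Freeze everything except the low-rank A/B matrices before training.
lora.mark_only_lora_as_trainable(model)

# ... run an ordinary training loop over the trainable parameters ...

# Save only the LoRA parameters, a small fraction of the full model.
torch.save(lora.lora_state_dict(model), "lora_ckpt.pt")
```

At load time the README's pattern is to restore the pretrained weights first and then apply the saved LoRA weights with `model.load_state_dict(..., strict=False)`.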
huangguoliang1108's Repositories
huangguoliang1108/AdaLoRA
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
huangguoliang1108/KD-NLP
huangguoliang1108/LLMforDialogDataGenerate
Generate dialogue data from documents using LLMs such as ChatGLM2 or ChatGPT.
huangguoliang1108/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
huangguoliang1108/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (a usage sketch follows this list).
huangguoliang1108/qa-lora
Official PyTorch implementation of QA-LoRA
huangguoliang1108/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
huangguoliang1108/relora
Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates".
huangguoliang1108/SoRA
Source code for the EMNLP 2023 main-conference paper "Sparse Low-rank Adaptation of Pre-trained Language Models".
huangguoliang1108/svd-lorafa
Final project for the "Numerical Linear Algebra" course: speeding up the LoRA-FA method with a reasonable initialization.
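For the peft repository above, the equivalent workflow goes through Hugging Face PEFT's LoraConfig and get_peft_model. This is a hedged sketch following the library's documented API; the base checkpoint and the LoRA hyperparameters are illustrative choices, not anything specified by this profile.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Placeholder base checkpoint; substitute any causal LM you have locally.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# Illustrative LoRA hyperparameters (rank, scaling, dropout).
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# The wrapped model trains like any transformers model; only the adapter
# weights update, and model.save_pretrained("lora-adapter") stores just them.
```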