Pinned Repositories
LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
parler-tts
Inference and training library for high-quality TTS models.
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
LLM-evaluation-datasets
LOMO
LOMO: LOw-Memory Optimization
ChatGLM2-SFT
ChatGLM2-6B fine-tuning, SFT/LoRA, instruction fine-tuning