
Chat LLaMA

8bit-LoRA or 4bit-LoRA

Repository for training a LoRA for the LLaMA (1 and 2) models on HuggingFace with 8-bit or 4-bit quantization. Research use only for LLaMA 1; LLaMA 2 is licensed for commercial use.
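As a rough sketch of what 8-bit LoRA training setup looks like with HuggingFace transformers, peft, and bitsandbytes (the base model name and LoRA hyperparameters below are illustrative placeholders, not this repository's exact configuration — see the docs for the actual training scripts):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the base model with 8-bit quantization (swap load_in_8bit for
# load_in_4bit to train a 4-bit LoRA instead).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Prepare the quantized model for training and attach LoRA adapters
# to the attention projections (example hyperparameters).
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```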


👉 Join our Discord Server for updates, support & collaboration


Dataset creation, training, weight merging, and quantization instructions are in the docs.
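As a rough illustration of the weight-merging step, a trained LoRA adapter can be folded back into the base model with peft; the model name and adapter path below are placeholders, and the repository's own merging script may differ:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the full-precision base model, apply the trained adapter,
# then merge the LoRA weights into the base weights and save.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```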

Check out our trained LoRAs on HuggingFace

Anthropic's HH