# Fine-Tune-LLAMA-2-With-Custom-Dataset-Using-LoRA-And-QLoRA-Techniques

This repository provides a comprehensive guide and implementation for fine-tuning the LLAMA 2 language model using custom datasets. By using Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA), it enables efficient and scalable model fine-tuning, making it suitable for resource-limited environments.
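To make the core idea concrete, here is a minimal pure-Python sketch of the LoRA technique the repository applies: instead of updating the full pretrained weight matrix `W`, two small trainable matrices `A` (r x d_in) and `B` (d_out x r) are learned so the effective weight becomes `W + (alpha / r) * B @ A`. The matrix shapes, `alpha`, and `r` values below are illustrative assumptions, not values from this repository; a real fine-tuning run would use the Hugging Face `peft` library rather than hand-rolled matrices.

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Compute y = (W + (alpha / r) * B @ A) @ x without materializing
    the summed weight: frozen base path plus a scaled low-rank path."""
    base = matmul(W, x)              # frozen pretrained projection
    delta = matmul(B, matmul(A, x))  # trainable low-rank correction
    s = alpha / r                    # LoRA scaling factor
    return [[b[0] + s * d[0]] for b, d in zip(base, delta)]

# Toy example: 2x2 frozen weight, rank-1 adapter.
W = [[1, 0], [0, 1]]   # frozen pretrained weight (identity here)
A = [[1, 1]]           # r=1 down-projection
B = [[1], [1]]         # r=1 up-projection
x = [[2], [3]]         # input column vector
y = lora_forward(W, A, B, x, alpha=2, r=1)
```

Because only `A` and `B` receive gradients, the number of trainable parameters drops from `d_out * d_in` to `r * (d_out + d_in)`, which is what makes LoRA (and its 4-bit-quantized variant, QLoRA) practical on limited hardware.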

Primary language: Jupyter Notebook. License: MIT.
