Fine-Tuning-LLaMA-2-with-QLORA-and-PEFT

This project fine-tunes the LLaMA-2 model with Quantized Low-Rank Adaptation (QLoRA) and other parameter-efficient fine-tuning (PEFT) techniques to adapt it to specific NLP tasks. The fine-tuned model is demonstrated through a Streamlit application that showcases its capabilities in a real-time, interactive setting.
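The sketch below illustrates how QLoRA and PEFT typically fit together with the Hugging Face transformers and peft libraries: the base model is loaded in 4-bit precision and low-rank adapters are attached so that only a small fraction of the parameters is trained. This is a minimal, hedged outline rather than the project's actual notebook code; the model ID, LoRA rank, and target modules shown here are assumptions.

```python
# Minimal QLoRA setup sketch (illustrative, not the repository's exact code).
# Assumes access to the gated "meta-llama/Llama-2-7b-hf" checkpoint; swap in
# your own model ID and dataset as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections -- the "LoRA" part of QLoRA.
# Rank and target modules below are typical choices, not the project's settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From this point, training would typically proceed with a standard transformers Trainer (or trl's SFTTrainer) over a task-specific dataset, and the resulting adapter weights can then be loaded behind the Streamlit app for interactive inference.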

Primary language: Jupyter Notebook
