finetuning-Llama2-7b

Fine-tuning the LLaMA 2-7B model (7 billion parameters) with the QLoRA technique on a single NVIDIA T4 GPU with 16 GB of VRAM.
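
A minimal sketch of what such a QLoRA setup typically looks like, assuming the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries; the model name, LoRA hyperparameters, and target modules below are illustrative choices, not necessarily those used in the notebook:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization so the 7B base model fits in the T4's 16 GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)

model_name = "meta-llama/Llama-2-7b-hf"  # gated model; requires an accepted license
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training and attach small LoRA adapters;
# only the adapter weights are updated, keeping memory use low
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                     # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows that only a small fraction of weights train
```

From here, training can proceed with a standard `transformers` `Trainer` or the `trl` `SFTTrainer`; the key point is that the frozen base model stays in 4-bit precision while gradients flow only through the LoRA adapters.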

Primary language: Jupyter Notebook
