# gemma-Instruct-2b-Finetuning-on-alpaca

This project demonstrates the steps required to fine-tune the Gemma model for tasks such as code generation. It uses QLoRA quantization to reduce memory usage and the `SFTTrainer` from the `trl` library for supervised fine-tuning.
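A minimal sketch of the QLoRA + `SFTTrainer` setup described above. The model checkpoint, dataset, and hyperparameters here are illustrative assumptions, not necessarily the exact values used in the notebook; the `SFTTrainer` keyword arguments also vary between `trl` versions.

```python
# Sketch: QLoRA fine-tuning of Gemma 2B on Alpaca-style data.
# Assumed checkpoint/dataset: google/gemma-2b-it, tatsu-lab/alpaca.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "google/gemma-2b-it"  # illustrative model choice

# 4-bit NF4 quantization: the frozen base weights are stored in 4 bits,
# which is what makes fine-tuning a 2B model feasible on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters are the only trainable parameters; the quantized base
# model stays frozen (the "Q" + "LoRA" halves of QLoRA).
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # Alpaca's pre-formatted prompt+response column
    args=TrainingArguments(
        output_dir="gemma-alpaca-qlora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
    ),
)
trainer.train()
trainer.save_model("gemma-alpaca-qlora")  # saves only the LoRA adapter weights
```

Running this requires a CUDA GPU plus the `transformers`, `peft`, `trl`, `bitsandbytes`, and `datasets` packages, and access to the gated Gemma weights on the Hugging Face Hub.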

Primary language: Jupyter Notebook. License: MIT.
