Gemma-7b-ft-QLoRA-300-Alpaca

This project fine-tunes Google's Gemma-7B LLM with QLoRA. The training data is the cleaned version of the Alpaca dataset originally released by Stanford University; due to resource limitations, only 300 rows were used for fine-tuning.
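
The sketch below shows what such a QLoRA setup typically looks like with the Hugging Face `transformers`/`peft`/`trl` stack. It is illustrative only: the dataset id (`yahma/alpaca-cleaned`), LoRA hyperparameters, prompt format, and trainer arguments are assumptions and may differ from the actual notebook.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

model_id = "google/gemma-7b"

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA adapters are trained on top of the frozen, quantized base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Take only the first 300 rows of the cleaned Alpaca dataset.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:300]")

def to_text(example):
    # Flatten instruction/input/output into one training string.
    return {
        "text": (
            f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\n"
            f"Response: {example['output']}"
        )
    }

dataset = dataset.map(to_text)

# SFTTrainer argument names vary slightly across trl versions;
# recent versions pick up the "text" column automatically.
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
```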

After fine-tuning, the model was pushed to the Hugging Face Hub, where it can be viewed here.
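
Continuing from the training sketch above, pushing the trained adapter to the Hub and loading it back for inference might look like the following. The repo id is a placeholder, not the actual Hub repository linked above.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/Gemma-7b-ft-QLoRA-300-Alpaca"  # placeholder repo id

# Push the trained LoRA adapter and tokenizer to the Hub
# (uses `trainer` and `tokenizer` from the training sketch above).
trainer.model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)

# Later: load the base model and attach the fine-tuned adapter for inference.
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
model = PeftModel.from_pretrained(base, repo_id)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

prompt = "Instruction: Give three tips for staying healthy.\nInput: \nResponse: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```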