Alpaca-3B-Fine-Tuned

In this project, I provide code and a Colaboratory notebook for fine-tuning a 3B-parameter Alpaca model, originally developed at Stanford University. The model is adapted with LoRA (Low-Rank Adaptation) through Hugging Face's PEFT library, which reduces the number of trainable parameters and lets fine-tuning run with far fewer computational resources.
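As a rough illustration of how a LoRA adaptation is typically wired up with PEFT, the sketch below loads a 3B causal language model and wraps it in a LoRA configuration. The checkpoint name, rank, and target modules here are illustrative assumptions, not the exact settings used in the notebook.

```python
# Minimal sketch of LoRA fine-tuning setup with Hugging Face PEFT.
# The base checkpoint name is an assumed placeholder; substitute the
# 3B model actually used in the notebook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "openlm-research/open_llama_3b"  # assumed placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# LoRA injects small trainable low-rank matrices into selected layers,
# so only a small fraction of the parameters is updated during training.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update matrices
    lora_alpha=16,            # scaling factor applied to the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical LLaMA-style attention projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports how few parameters are trainable
```

The wrapped model can then be passed to a standard `transformers` `Trainer` (or a custom training loop) over an instruction-following dataset; only the LoRA adapter weights are saved at the end.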

Primary language: Jupyter Notebook · License: MIT
