# Alpaca-350M-Fine-Tuned

This project provides the code and a Colaboratory notebook for fine-tuning a 350M-parameter Alpaca-style model, based on the Alpaca project originally developed at Stanford University. The model was adapted with LoRA (Low-Rank Adaptation) via Hugging Face's PEFT library, which reduces the computational resources and the number of trainable parameters needed for fine-tuning.
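
For context, below is a minimal sketch of what a LoRA + PEFT setup like this typically looks like. The base checkpoint name and the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`) are illustrative assumptions, not this project's actual configuration; see the notebook for the real values.

```python
# Illustrative LoRA fine-tuning setup with Hugging Face transformers + peft.
# The checkpoint and hyperparameters are placeholders, not this repo's config.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "facebook/opt-350m"  # hypothetical 350M base checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# cutting the trainable parameter count from hundreds of millions to a few
# million, which is what makes fine-tuning feasible on a free Colab GPU.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the low-rank update matrices (assumed)
    lora_alpha=16,      # scaling factor for the adapter output (assumed)
    lora_dropout=0.05,  # dropout on the adapter path (assumed)
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```

The wrapped model can then be passed to a standard `transformers.Trainer` loop over an Alpaca-format instruction dataset; only the adapter weights are updated and saved.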
