llama2-finetune-finance-alpaca-colab

Fine-tuning a LLaMA 2 model on the Finance Alpaca dataset using 4-bit/8-bit quantization, making training feasible on a single Colab GPU.

Primary language: Python