rbhatia46/llama2-finetune-finance-alpaca-colab
Fine-tuning a LLaMA 2 model on the Finance Alpaca dataset using 4-/8-bit quantization, easily feasible on Colab.
Python
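The setup described (a quantized LLaMA 2 base plus fine-tuning within Colab memory limits) is typically done QLoRA-style: load the base model in 4-bit with bitsandbytes, attach LoRA adapters with peft, and run supervised fine-tuning with trl. Below is a minimal sketch, not the repository's exact notebook, assuming the gated meta-llama/Llama-2-7b-hf weights, the gbharti/finance-alpaca dataset on the Hugging Face Hub, and a recent trl release (exact keyword names such as `tokenizer` vs. `processing_class` vary across versions).

```python
# Minimal 4-bit (QLoRA-style) fine-tuning sketch for Finance Alpaca.
# Assumptions: meta-llama/Llama-2-7b-hf base model, gbharti/finance-alpaca
# dataset, recent transformers/peft/bitsandbytes/trl releases.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed 7B base model (gated repo)

# Load the base model quantized to 4-bit NF4 (use load_in_8bit=True for 8-bit).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA 2 ships without a pad token

# LoRA adapters: only these small matrices are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Flatten each Alpaca-style record (instruction / input / output) into one prompt.
def to_text(example):
    if example.get("input"):
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Input:\n{example['input']}\n\n"
                  f"### Response:\n{example['output']}")
    else:
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Response:\n{example['output']}")
    return {"text": prompt}

dataset = load_dataset("gbharti/finance-alpaca", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    tokenizer=tokenizer,  # newer trl releases name this `processing_class`
    args=SFTConfig(
        output_dir="llama2-finance-alpaca",
        dataset_text_field="text",
        max_steps=500,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        logging_steps=25,
    ),
)
trainer.train()
trainer.save_model("llama2-finance-alpaca-adapter")  # saves only the LoRA adapter
```

Training only the LoRA adapters on top of a 4-bit base is what keeps the memory footprint small enough for a free or Pro Colab GPU; the saved adapter can later be merged into the full-precision base for inference.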