shashank140195/finetune_Llama2_LORA
This repository contains the work to finetune the Llama-2 7B HF model using LoRA on a single A100 40GB GPU.
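LoRA keeps the pretrained weights frozen and learns a low-rank additive update per layer, which is what makes fine-tuning a 7B model feasible on a single 40GB GPU. Below is a minimal NumPy sketch of the core idea; the dimensions, rank, and scaling factor are illustrative, not the actual Llama-2 or repository settings.

```python
import numpy as np

# Illustrative dimensions only; real Llama-2 7B projection matrices are much larger.
d_out, d_in, r = 64, 64, 8   # r is the LoRA rank
alpha = 16                   # LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

# Effective weight after adaptation; only A and B (2*r*d parameters) are trained.
W_adapted = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(W_adapted, W)
```

In practice this update is applied through a library such as Hugging Face PEFT rather than by hand; the sketch only shows why the trainable parameter count drops from `d_out * d_in` to `r * (d_out + d_in)` per adapted matrix.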
Language: Python

Issues: 4
- #1 "4bit or 8bit quantization?" opened by Tizzzzy