leehanchung/lora-instruct
Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
Python · Apache-2.0 license
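The repo description mentions finetuning with PEFT LoRA. Below is a minimal, illustrative sketch of what that typically looks like with Hugging Face `transformers` and `peft`; the model name, target modules, and hyperparameters are assumptions for illustration, not taken from this repository's code.

```python
# Illustrative LoRA finetuning setup (not this repo's exact code).
# Model id, rank, and target modules below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "tiiuae/falcon-7b"  # assumed example; the repo also targets LLaMA, MPT, RedPajama

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,   # half precision to fit consumer GPUs
    device_map="auto",
    trust_remote_code=True,      # Falcon/MPT ship custom modeling code
)

# LoRA: freeze the base weights and train small low-rank adapter matrices.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # adapter rank (assumed)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon attention projection; model-specific
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically well under 1% of total parameters
```

Training then proceeds with a standard `transformers.Trainer` or a manual loop; only the small adapter weights are updated and saved, which is what makes finetuning on consumer hardware feasible.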
Issues
- Falcon-7B training loss not reducing (#14, opened by pcvishak, 0 comments)
- Can you please release the inference code? (#12, opened by pcvishak, 0 comments)
- Support for mpt-30b (#11, opened by creatorrr, 0 comments)
- Support for QLORA (#10, opened by louisoutin, 2 comments)
- Error message when training MPT-7B (#7, opened by jianchaoji, 0 comments)
- Can this codebase be applicable for finetuning larger models, e.g., falcon-40b? (#8, opened by ZeroYuHuang, 2 comments)
- Training on Colab possible? (#1, opened by CypherpunkSamurai, 1 comment)
- Error message during training (#4, opened by ChinJY, 1 comment)
- MPTForCausalLM.forward() got an unexpected keyword argument 'inputs_embeds' (#2, opened by bhupendrasalesken)