declare-lab/flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
Python · Apache-2.0
Issues
- Loss value is NaN (#19, opened by liuqingli, 10 comments)
- Unable to train on 4-5 GTX 1070s (#17, opened by sfxworks, 2 comments)
- OMP: Error #100: Fatal system error detected. (#16, opened by sfxworks, 1 comment)
- I don't see any progress logs (#6, opened by allthingssecurity, 2 comments)
- Trouble training (#5, opened by KurtFeynmanGodel, 1 comment)
- Quantized (#2, opened by naturallydeer, 2 comments)
- Usage example in README doesn't work (#18, opened by sfxworks, 6 comments)
- Performance on MMLU (#13, opened by allanj, 2 comments)
- Commercial Use? (#3, opened by avb-is-me, 3 comments)
- wget https://raw.githubusercontent.com/tloen/alpaca-lora/main/alpaca_data_cleaned.json -O data/alpaca_clean.json (#15, opened by kizombaciao, 2 comments)
- LoRA + FSDP issue (#10, opened by ngun7, 2 comments)
- Is there any plan for Flan-UL2? (#9, opened by nonkung51, 4 comments)
- Use GPT4All dataset (#7, opened by Shiro836)