LLaMA 2 support
Opened this issue · 1 comment
giyaseddin commented
Hey team, thank you for the great work on Bactrian X.
Do the trained LoRA weights shared in this work support the second version of LLaMA (Llama 2)?
If not, how much compute would be required to retrain them on the new model variants?
haonan-li commented
Hi, sorry for the late reply. We currently have no plans to train on Llama-2; our next step is to improve the data quality by removing low-quality data.
Training a single LoRA adapter for a 7B model (1 language) can be done on 1x 40GB A100 within 12 hours.
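For anyone who wants to attempt the Llama-2 retraining themselves, here is a minimal sketch of the adapter setup using Hugging Face `peft`. The base model ID and the LoRA hyperparameters are illustrative assumptions, not the exact Bactrian-X training settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

# Assumed Llama-2 checkpoint; Bactrian-X adapters were trained on LLaMA-1,
# but the same PEFT recipe should carry over.
BASE_MODEL = "meta-llama/Llama-2-7b-hf"

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,  # fp16 keeps the 7B model within a 40GB A100
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Illustrative LoRA hyperparameters, not the published Bactrian-X config.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

Because only the adapter weights receive gradients, the memory footprint stays well below full fine-tuning, which is what makes the single-A100 budget above plausible.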