QLoRA with EETQ is quite slow
hjh0119 opened this issue · 3 comments
hjh0119 commented
The training process is quite slow, whereas 8-bit HQQ speeds it up by more than tenfold. Is this normal, or have I missed something in the code?
import torch
from transformers import EetqConfig, AutoModelForCausalLM

# 8-bit EETQ weight quantization
config = EetqConfig("int8")

# The HQQ alternative that trains much faster:
# from transformers import HqqConfig
# config = HqqConfig(nbits=8)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=config,
)
# train...
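For reference, the 8-bit HQQ path the reporter compared against can be sketched as below. This is a minimal sketch, assuming the `hqq` package is installed alongside transformers; the `torch_dtype` choice is an assumption, not something stated in the issue.

```python
import torch
from transformers import AutoModelForCausalLM, HqqConfig

# 8-bit HQQ quantization, the variant the reporter found >10x faster to train.
config = HqqConfig(nbits=8)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=config,
    torch_dtype=torch.float16,  # assumption: fp16 for the non-quantized parts
)
# train...
```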
dtlzhuangz commented
Sorry for the trouble. EETQ's backward pass has not been fully optimized yet.
hjh0119 commented
Got it. Is the optimization on the roadmap?
dtlzhuangz commented
Not yet.