NetEase-FuXi/EETQ

QLoRA with EETQ is quite slow

hjh0119 opened this issue · 3 comments

Training with EETQ is quite slow, whereas using 8-bit HQQ speeds it up by more than tenfold. Is this normal, or am I missing something in my code?

import torch
from transformers import EetqConfig, AutoModelForCausalLM

# Quantize the model's linear layers to int8 with EETQ at load time.
config = EetqConfig("int8")

# The HQQ equivalent, which trains much faster for me:
# from transformers import HqqConfig
# config = HqqConfig(nbits=8)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=config,
)

# train...
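
For reference, one rough way to quantify the gap is to time a single forward+backward step under each config. This is a minimal sketch, assuming a CUDA GPU and the peft library; the LoRA settings, sequence length, and device_map are arbitrary choices for illustration, not taken from the report above.

import time
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, EetqConfig

# Load the EETQ-quantized model (swap in HqqConfig to compare backends).
config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=config,
    device_map="auto",
)
model = get_peft_model(model, LoraConfig(r=16, task_type="CAUSAL_LM"))

# Time one forward+backward step on a dummy batch.
input_ids = torch.randint(0, model.config.vocab_size, (1, 512), device="cuda")
torch.cuda.synchronize()
start = time.perf_counter()
loss = model(input_ids=input_ids, labels=input_ids).loss
loss.backward()
torch.cuda.synchronize()
print(f"forward+backward: {time.perf_counter() - start:.2f}s")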

Sorry for the trouble. EETQ's backward pass has not been fully optimized yet, so slow QLoRA training is currently expected.
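
To illustrate where the cost can come from (a schematic sketch, not EETQ's actual kernels): even though the quantized base weights are frozen in QLoRA, backward must still compute grad_input through every quantized linear layer so gradients reach the LoRA adapters, and a naive dequantize-then-matmul at that step can dominate training time.

import torch

# Schematic int8 linear with the kind of naive backward that slows QLoRA.
# Illustration only; this is not EETQ's implementation.
class NaiveInt8Linear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w_int8, scale):
        ctx.save_for_backward(w_int8, scale)
        w = w_int8.to(x.dtype) * scale        # dequantize (out, in)
        return x @ w.t()

    @staticmethod
    def backward(ctx, grad_out):
        w_int8, scale = ctx.saved_tensors
        # grad_input is still required so gradients reach LoRA adapters in
        # earlier layers; dequantizing here without a fused kernel is slow.
        w = w_int8.to(grad_out.dtype) * scale
        return grad_out @ w, None, None       # weights frozen: no weight grad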

Got it. Is this optimization on the roadmap?

Not yet.