qwopqwop200/GPTQ-for-LLaMa

How to quantize BLOOM after LoRA/P-tuning?

moonlightian opened this issue · 0 comments

I finetuned BLOOM with LoRA and would like to quantize the model with GPTQ:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModelForCausalLM

# Load the base BLOOM checkpoint
self.model = AutoModelForCausalLM.from_pretrained(
    self.config['checkpoint_path'],
    device_map='auto',
)
# Load the LoRA adapter on top of the base model
self.model = PeftModelForCausalLM.from_pretrained(
    self.model,
    '/tmp/bloom_ori/lora_bloom',
)
```
However, the following error occurred:
*(screenshot of the error traceback)*
It seems that after loading the adapter there is a dimension mismatch between `alibi` and the `attention_mask`. How can I get rid of this error and quantize the model together with the adapter?
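One possible workaround, sketched below and untested against this repo: merge the LoRA weights into the base model with PEFT's `merge_and_unload()` before quantizing, so GPTQ sees a plain `transformers` model instead of the PEFT wrapper. The `checkpoint_path` placeholder and the output path are assumptions standing in for your actual paths; the adapter path is the one from the snippet above.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base BLOOM checkpoint -- same value as self.config['checkpoint_path'] above
checkpoint_path = '...'

base_model = AutoModelForCausalLM.from_pretrained(
    checkpoint_path,
    device_map='auto',
)

# Attach the LoRA adapter, then fold its weights into the base Linear layers
model = PeftModel.from_pretrained(base_model, '/tmp/bloom_ori/lora_bloom')
model = model.merge_and_unload()  # returns a plain transformers model

# Save the merged checkpoint; GPTQ can then quantize it like a vanilla
# BLOOM model, with no PEFT wrapper in the forward pass
model.save_pretrained('/tmp/bloom_lora_merged')  # hypothetical output path
```

Since the merged model has no adapter wrapper at all, any shape handling that PEFT does around `alibi`/`attention_mask` is taken out of the picture, which may be enough to sidestep the mismatch.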